We have a mobile site that generates its pages with a remote script that makes a bunch of API calls. Consequently, when Googlebot crawls these pages, it sees essentially a blank page: it picks up the title, but nothing at all in the body.
One of our developers pointed me to a site that advertises a service for getting around this kind of issue. They use a headless browser to download and fully render the pages, then save the source code of the rendered pages on their server. When a bot requests a page from us, we use a proxy to feed it the pre-rendered version instead.
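If I understand their description correctly, the logic on our end would amount to something like this. This is just a rough sketch of how I picture it; the bot list and the `fetch_prerendered` helper are my own stand-ins, not the vendor's actual code:

```python
import re

from flask import Flask, request

app = Flask(__name__)

# Crawlers the service sniffs for (an assumed list, not the vendor's).
BOT_AGENTS = re.compile(r"googlebot|bingbot|yandex|baiduspider", re.IGNORECASE)


def fetch_prerendered(path: str) -> str:
    """Hypothetical helper: return the saved HTML snapshot for `path`
    from the vendor's cache of headless-browser renders."""
    raise NotImplementedError


@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path: str):
    user_agent = request.headers.get("User-Agent", "")
    if BOT_AGENTS.search(user_agent):
        # Bots get the pre-rendered snapshot of the same page.
        return fetch_prerendered(path)
    # Regular users get the normal JavaScript-driven page.
    return app.send_static_file("index.html")
```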
Sound like cloaking to you? It does to me. After all, we'd be sniffing specifically for search engine spiders. According to the company that provides this service, it isn't cloaking: even though you're sniffing for the bot, you're presenting it with the exact same page a user sees. They also say they're doing exactly what Google recommends on its page about making AJAX applications crawlable (my understanding of that scheme is sketched below). That makes sense to me, but I'm not 100% convinced this won't get us in trouble, mostly because the technology is over my head and I can't really judge whether what I'm reading is trustworthy.
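For reference, here's roughly how I understand the scheme that page describes: the crawler rewrites a hash-bang URL into an `_escaped_fragment_` query string, and the server is expected to answer that request with a rendered HTML snapshot. This is a simplification; real crawlers also percent-encode special characters in the fragment:

```python
def escaped_fragment_url(pretty_url: str) -> str:
    """Map a '#!' URL to the URL Googlebot actually fetches under
    the AJAX crawling scheme (simplified: no percent-encoding)."""
    base, _, fragment = pretty_url.partition("#!")
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={fragment}"


print(escaped_fragment_url("https://example.com/page#!state=1"))
# https://example.com/page?_escaped_fragment_=state=1
```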