@corbin not true. Precautions include rate limiting, prohibitions on scraping in the terms of service, and machine learning to detect bot-like behaviour and force captchas before serving more content, all of which Facebook uses on its own sites. They could also block or throttle Facebook-owned IP ranges.
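For what it's worth, the rate-limiting part is often done with a per-client token bucket (e.g. keyed by IP address). A minimal sketch in Python, purely illustrative, with all class and parameter names hypothetical:

```python
import time


class TokenBucket:
    """Per-client token bucket: each client may make `capacity` requests
    in a burst, with tokens refilled at `rate` per second."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.buckets = {}  # client id (e.g. IP) -> (tokens, last timestamp)

    def allow(self, client_id, now=None):
        """Return True if this request is permitted, consuming one token."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(client_id, (self.capacity, now))
        # Refill tokens for the elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[client_id] = (tokens - 1, now)
            return True
        self.buckets[client_id] = (tokens, now)
        return False


# A scraper bursting 6 requests against a 5-request bucket: the 6th is denied,
# and after waiting long enough the bucket refills.
limiter = TokenBucket(capacity=5, rate=1.0)
burst = [limiter.allow("203.0.113.7", now=0.0) for _ in range(6)]
later = limiter.allow("203.0.113.7", now=10.0)
```

A real deployment would sit this behind a shared store (and usually a CDN or reverse proxy does it for you), but the principle is the same: normal users never hit the limit, bulk scrapers do.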
@scott @corbin Yeah, I'd second this: it really, really isn't true. Of course anyone can break the rules of any website by scraping it, but there are technical ways to recognise and limit that. And it certainly doesn't mean we should just leave the doors wide open and not bother making it very clear that unauthorized collection of data is not an acceptable use of the service. That people can break rules doesn't mean there's no point having rules.