Meta, the parent company of Facebook and Instagram, has been scrutinized for using Australian users’ data to train its AI systems. During a recent Senate inquiry, the tech giant confirmed that publicly shared posts, including images and captions, dating back to 2007 are being ingested to develop AI models such as LLaMA and Meta AI. While this practice has stirred privacy concerns, particularly regarding the use of content shared by children, Meta executives defended their approach, highlighting the importance of Australian data for creating accurate and culturally relevant AI tools.
Data Usage and Privacy Implications
Meta’s collection of public posts to fuel its AI models is at the heart of the controversy. According to Meta’s privacy policy, publicly shared content from Facebook and Instagram users is utilized to improve AI technologies. The company’s Asia Pacific public policy vice president, Simon Milner, emphasized that Australian data helps reduce biases in AI systems by incorporating local cultural diversity. This data is essential to ensure that the AI outputs reflect the uniqueness of Australian society.
However, the inquiry revealed that this practice extends to photos of children, provided they were shared by adults. Melinda Claybaugh, Meta’s global director of privacy policy, clarified that while Meta does not use data from accounts of minors, images of children shared publicly by adults are fair game. This revelation has raised red flags among privacy advocates, who question whether users fully understand the extent to which their data is being utilized.
Limited Opt-Out Options for Australians
One of the most contentious aspects of Meta’s data collection is the lack of an opt-out option for Australian users. In contrast to Europe, where the General Data Protection Regulation (GDPR) provides robust privacy protections, Australians have no formal way to prevent their data from being used to train AI systems. Meta offers European users the choice to opt out of their data being used for AI training due to the region’s stricter privacy regulations, but the company has indicated no plans to extend this option to Australians.
Users in Australia can adjust their privacy settings to prevent their content from being publicly accessible, but this workaround doesn’t offer the same level of control as the opt-out option available in Europe. According to Milner, balancing privacy concerns with technological innovation is complex, but asking users to opt in or out of data collection could hinder user experience and lead to frustration.
Legal and Ethical Questions
The Senate inquiry, which explored the broader implications of AI development, raised concerns about whether Meta’s data collection practices are in line with Australian privacy laws. Under Australia’s Privacy Act, organizations are required to obtain consent when using personal information for purposes other than those for which it was originally collected. The Office of the Australian Information Commissioner (OAIC) has expressed its intention to engage with Meta to ensure that the company’s AI development adheres to privacy laws.
Meta, however, argues that using publicly available information to train AI models is an industry-wide practice and that it has taken steps to ensure data is used responsibly. For example, Meta stated that it does not use private posts, messages, or content from underage users to train its AI. The company also emphasized that publicly available information from third-party sources is licensed when used to train AI (Information Age).
The Global Context of AI Training
Meta’s data practices are not unique to Australia; the company has faced similar criticisms in Europe and the United States. In May 2024, Meta announced plans to use public posts from European users to train AI systems, but the rollout was paused after privacy watchdogs raised concerns. European privacy advocates, including the organization Noyb, argued that Meta’s system should be opt-in rather than opt-out and that once data is used to train AI, it cannot be removed under the “right to be forgotten.”
In Australia, privacy concerns are heightened due to the lack of strong legal safeguards similar to those in Europe. Australian users may feel left behind, as Meta’s global privacy standards don’t fully apply to them. The Australian government has yet to implement recommendations from a review of the Privacy Act, leaving users with limited control over how their data is used.
The Future of AI and Privacy
As AI technologies continue to advance, the ethical implications of data usage will remain a hot topic. Meta’s use of Australian data to train its AI systems has sparked a debate over privacy, consent, and the balance between innovation and user rights. The Australian Senate inquiry is expected to release a report that may influence future regulations on AI and data privacy in the country.
In the meantime, users concerned about their privacy can adjust their settings to limit what is publicly shared, though this solution falls short of a comprehensive opt-out option. As AI becomes increasingly integrated into everyday life, questions around data usage and privacy will likely intensify, forcing governments, tech companies, and users to navigate the complex landscape of digital rights and responsibilities.