Facebook whistleblower Frances Haugen gave detailed testimony to the European Parliament, following earlier sessions before British and US lawmakers.
Haugen's core message was the same stark warning she has issued on both sides of the Atlantic: "Facebook has chosen to prioritize profit over safety and to ignore the amplification of harmful content that harms individuals, society and democracy. European regulators must hold platforms that operate this irresponsibly to account, and there is no time to lose for legislators to impose rules on social media."
Haugen, Facebook's most prominent whistleblower to date, received a very warm reception in the European Parliament. MEPs thanked her for taking the time to air her concerns publicly, describing it as "courage," and applauded her both before she spoke and at the end of the three-hour presentation and Q&A session.
Lawmakers questioned her on a range of issues, but their greatest concern was how the incoming EU-wide digital regulation could extract maximum effective transparency and accountability from the unaccountable platform giants.
The Digital Services Act (DSA) is currently before MEPs: amendments to the European Commission's proposal are being considered and voted on, and the bill could change significantly in the process.
Some lawmakers are calling for an outright ban on behavioral advertising, with privacy-friendly alternatives such as contextual advertising written into the bill instead. Another amendment that has recently gained traction calls for news media content to be exempted from platform takedowns.
As it turned out, Haugen did not favor these amendments, though she said she broadly supports the regulation.
The overarching aim of the DSA is to ensure a trusted and safe online environment. Many of the MEPs who spoke during today's session seized on the worldwide attention drawn by Haugen's appearance before the European Parliament as a chance to trumpet the EU's work. Coming in the midst of yet another publicity crisis for Facebook, it signaled that digital regulation is not merely under discussion but advancing rapidly toward adoption.
The Facebook whistleblower was happy to flatter those political egos. She said she was "grateful" for the EU's serious commitment to platform regulation, suggesting the bloc has the opportunity to set a "global gold standard" with the DSA.
That said, she used a near-identical phrase before the British Parliament during a separate evidence session in October 2021, applying the same slogan to that country's online safety legislation.
Haugen reiterated to MEPs that Facebook is extraordinarily good at "cheating with data," returning to her account of what happens on Facebook's platforms. She impressed on lawmakers that they must not pass a lax law that merely requires data to be handed over; rather, Facebook should be required to explain every dataset it supplies, down to the detail of the queries used to pull the data, so that oversight audits can be generated.
Without such steps written into the legislation, Haugen warned, there would be a gaping loophole in the EU's new digital rules: Facebook would simply run whatever queries are needed to paint a box-ticking picture, providing data selectively in its own interest.
For regulation to work on an untrustworthy platform like Facebook, she recommended that it be multi-layered and dynamic, with continuous input from a broad ecosystem of civil society organizations and external researchers, so that new harms can be tracked and the law verified to be functioning as intended.
To achieve genuine accountability for the impact of AI, she urged that platform data be provided not only to the "vetted academics" currently envisaged by the DSA but also to external experts across a wider range of fields, broadening the base of oversight.
"It is clear that Facebook lies with data," she told the European Parliament. "I encourage the adoption of the DSA. When Facebook provides data, it must show how that data was obtained. [...] It is vital that it discloses the processes, queries and notes used to extract the data. Unless you can verify it, you cannot trust the information provided."
Haugen did more than issue warnings. She layered on further compliments, telling MEPs: "I strongly believe that Europe, as a vibrant, linguistically diverse democracy, has a critical role to play in regulating these platforms."
"Getting the DSA right for the EU's 450 million citizens, in all their linguistic and ethnic diversity, could be a game changer for the world. By requiring each platform to assess the societal risks of its operations, decisions about what products to build and how to build them would no longer be driven solely by profit maximization. Europe could establish systematic rules and standards for dealing with risks while protecting freedom of speech, and show the world how transparency, oversight and enforcement should work."
"Platforms must disclose what safety systems they have, which languages those systems support, and how they perform in each language. Ensuring this is a serious and urgent need," she continued, fleshing out her case for comprehensive disclosure. "Because honestly, is this not dangerous for the majority of Europeans?"
Such an approach, Haugen argued, would push Facebook toward the "language-independent, content-neutral solutions" needed to address the negative impacts felt across every market and language the platform operates in, delivering benefits at a scale beyond any single market.
The skew in Facebook's (limited) safety budget, i.e. how much of it is directed at English-speaking markets and/or the handful of markets where it fears regulation, is one of the core issues amplified by the leak of so many internal Facebook documents. By forcing contextual transparency onto Facebook's AI models, she suggested, the global inequity in how the powerful platform operates (what it prioritizes and what it neglects) could be addressed. That would require detailed information broken down by market, language, safety system and targeted cohort, in addition to general performance indicators, she noted.
Making Facebook treat safety as a systemic requirement would not only fix problems the platform causes in markets across Europe; it would also "raise the voices of people who live in vulnerable regions of the world and have less influence," she argued, adding: "The most linguistically diverse regions of the world are often the most fragile ones, and they need European intervention. Europe has influence, and that is why it can really exercise its strength on behalf of the people of those regions."
Much of Haugen's testimony was familiar from her earlier appearances and press conferences. In the Q&A, meanwhile, many EU lawmakers tried to draw her out on whether Facebook's problem of amplifying harmful content could be solved by an outright ban on micro-targeting/behavioral advertising (an idea being actively debated in the Parliament), which would stop the adtech giant from profiting through data-driven manipulation of the people behind the data.
Haugen pushed back, saying she would rather let people choose whether or not to be targeted with advertising than have regulators decide for them.
Instead of a blanket ban, she suggested that "specific things and ads [...] really need to be regulated," citing ad pricing as one area for regulation. "The current system subsidizes hate: it is five to ten times cheaper to run a hateful political ad than a non-hateful one. Given that, I think we need to flatten ad rates," she explained. "But I also think we should regulate ads that target specific people."
"I don't know if you are aware, but it is possible to target an audience of 100 people with a particular ad, and I am fairly sure that is being abused. Analyzing who is most over-exposed to political ads, it is, unsurprisingly, the people of Washington, DC, who are extremely over-exposed. We are talking about thousands of political ads a month. So I don't think it is acceptable to have a mechanism for targeting specific people without their awareness."
Haugen also noted that Facebook enriches user profiles for ad targeting with third-party data sources, and argued that this should be banned.
"When it comes to profiling and data retention, I don't think third-party data should be allowed to be pulled in. Facebook works with credit card companies and other such sources, and that raises the profitability of its ads from the ground up," she added. "I think consent should be required each time a data source is brought in. People would find it deeply unpleasant to learn that Facebook holds some of this data about them."
When it came to behavioral advertising itself, however, she carefully avoided endorsing an outright ban.
That made for an interesting ripple in the session. The issue has momentum within the EU, local lawmakers' concerns about Facebook having been sharpened partly by the fallout from Haugen's own whistleblowing. Haugen may well have helped build that momentum, but she chose not to ride it.
"On targeted advertising, I strongly advocate that people should be able to choose how they are targeted, and I advocate banning the dark patterns that force people into choices," she said in one answer (without touching on how regulators could craft effective legislation against something as cynical and multifaceted as "dark pattern design").
"Platforms need to be transparent about how they use that data," she said, falling back on what is in essence her one core proposal, before restating another: "I strongly advocate that platforms be required to publish a policy committing them to a flat ad rate for all political ads. Political ads should not subsidize hate."
Her case against banning behavioral advertising seems to rest on (or rather depend on) regulators achieving total, comprehensive platform transparency: forcing Facebook (and its peers) to give an exact picture of what they actually do with people's data, so that users can make a genuine choice about whether they want such targeting. Full accountability is therefore doing all the work.
But at another point in the session, after being asked whether children can truly consent to a platform like Facebook processing their data, Haugen argued that adults as well as children are, at this point, unable to understand what Facebook does with their data.
"As to whether children can understand what information they are trading away, I think that almost certainly we, as adults, don't understand what we are trading," she told lawmakers. "We don't know what goes into the algorithm or whether we are being targeted in a given way, so I don't think children are giving informed consent. They can't be, and children's capacities are limited."
With that in mind, her belief is that the comprehensive transparency described above is achievable, and that it would create a universal framework in which every adult could make a truly informed decision about whether to accept manipulative, data-driven behavioral advertising.
There is certainly a risk in Haugen's logic: if regulators communicate what users are being offered inadequately or inaccurately, and/or fail to guarantee users proper, universal education about their risks and rights, then following her proposed remedy for the fundamental lack of transparency would leave data-driven exploitation ongoing, with a free pass effectively baked into the law.
Her argument here seemed inconsistent. Her objection to banning behavioral advertising, and thus to tackling one of the root incentives behind social media's manipulative harms, looks less like logic than ideology.
(Certainly, believing that governments around the world can urgently stand up a highly functional "total" oversight capability, as she advocates, requires a leap of faith. At the same time, she has spent weeks arguing to legislators that the platform can only be understood as a highly context-specific, data-granular algorithmic machine; given the "astonishing" amounts of data Facebook holds, as came up in this Q&A session, the scale of the task facing regulators has been shown to be all too vast.)
This is, perhaps, the perspective one would expect of a data scientist rather than a rights expert.
(Her reflexive rejection of a behavioral advertising ban, as noted above, is focused on the black box of algorithms and data manipulation rather than on the world outside the machine where the harm is felt; it reads as the kind of trigger reaction typical of a platform insider who has lived inside it.)
Elsewhere in the session, Haugen further complicated her claim that thorough transparency is the single panacea for social media's ills: she warned the EU against leaving enforcement of such complex issues to as many as 27 national agencies.
If the EU did so, she suggested, the DSA would fail. Instead, she advised legislators to build a central EU bureaucracy to handle enforcement of the highly detailed, layered and dynamic rules she says are needed to rein in platforms of Facebook's scale. She also proposed that ex-industry algorithm experts like herself should find "a place" there, lending their expertise and "giving back by contributing to public accountability."
"The number of people with formal expertise in these areas, in how the algorithms actually work and what their consequences are, is vanishingly small worldwide. There are no master's or doctoral programs for this; you have to work at one of the companies in the field and get trained on the job," she added. "I am deeply concerned that if we delegate this function to 27 member states, we will never accumulate critical mass in any one place."
"It will be very difficult to find enough experts and then spread them that thinly."
For someone so insistent that platforms must be forced to reveal the devilish detail in their self-serving datasets and "vulnerable" AI, lest they effortlessly pull the wool over people's eyes, Haugen's opposition to regulators simply setting plain limits, such as barring the use of personal data in advertising, often looks doctrinaire.
MEPs also asked her directly whether regulators should limit what platforms can do with data and/or the inputs their algorithms can use. In answer, she prioritized transparency over restrictions (though elsewhere, as noted earlier, she argued that at the very least Facebook should be banned from pulling in third-party datasets to enrich ad profiling).
In the end, the algorithm expert's ideology seems to have blind spots when it comes to thinking outside the black box about how to craft effective regulation for data-driven software machines.
Some amount of hard braking may simply be necessary for democratic societies to regain control over the data-mining tech giants.
Haugen's greatest contribution, then, may be her highly detailed warnings about the loopholes that could fatally undermine digital regulation. On the multidimensional nature of those risks, she is undoubtedly correct.
At the start of her presentation she pointed to another potential loophole, urging legislators not to exempt news media content from the DSA (again, one of the amendments under consideration). "If you want a content-neutral rule, you really have to be neutral," she insisted. "Nothing gets picked, nothing gets excluded."
"Every modern disinformation campaign will exploit news media channels on digital platforms by gaming the system," she warned. "If the DSA makes it illegal for platforms to address these issues, we risk undermining the effectiveness of the law. Indeed, things could end up worse than they are today."
During the Q&A, Haugen fielded several questions from lawmakers about the new challenges regulators may face in light of Facebook's planned pivot toward building the so-called "metaverse."
Related article: Zuckerberg tells investors that trillion-dollar Facebook will become a "metaverse" company
She told lawmakers she was "extremely concerned," warning that the proliferation of metaverse sensors in homes and offices would expand data collection.
She also voiced concern that Facebook's focus on developing workplace tools could create situations where opting out isn't even an option, given how little say employees have over their employers' business tools. That suggests a future in which people face a dystopian choice between Facebook ad profiling and earning a living.
Facebook's new focus on the "metaverse" highlights what Haugen called the company's "meta problem": it prioritizes "moving forward" over stopping and fixing the problems its current technology has caused.
Regulators must get a lever on the juggernaut to force a change of course with safety in focus, she emphasized.
Image credit: BENOIT DOPPAGNE / BELGA MAG / AFP / Getty Images (under license)
(Original article by Natasha Lomas; translation by Dragonfly)