FEC forgoes new AI rulemaking ahead of election

A bipartisan group of commissioners at the federal government’s campaign finance agency voted Thursday to forgo new rulemaking on artificial intelligence, citing a lack of authority to limit or prohibit the use of the developing technology in federal elections.

The nonprofit watchdog Public Citizen had asked the Federal Election Commission (FEC) in May 2023 to revise the existing ban on the fraudulent misrepresentation of campaign authority to make clear that it prohibits deliberately deceptive campaign ads that use AI.

The FEC voted 5-1 to approve a compromise crafted by Democratic Commissioners Dara Lindenbaum and Shana Broussard and Republican Commissioners Trey Trainor and Allen Dickerson, which instead issues an interpretive rule clarifying that AI falls under existing regulations barring fraudulent misrepresentation.

“Four of us have been working together on this to make sure that we could give a clear answer to the public and to the requester in this petition when they asked if artificial intelligence, generative AI used in fraudulent misrepresentation per our statute applies. And we said yes, the statute is technology neutral,” Lindenbaum said.

Lindenbaum also called this “one of those instances where the FEC has worked,” an apparent retort to the common critique that the commission has long been deadlocked and broken.

The FEC is set up with six commissioners, three Republicans and three Democrats, and a majority of four votes is needed to take action, including issuing new rules or advisory opinions or launching investigations. For years, assembling that majority has proven difficult, especially when not every seat on the commission was filled.

Lindenbaum has extended a hand across the aisle, voting with the Republican commissioners on advisory opinions that have sometimes rankled good governance groups, which have accused her of supporting a deregulatory agenda.

Public Citizen Co-President Robert Weissman criticized the agency’s decision not to undertake new rulemaking after the language of the interpretive rule was issued last week.

“[T]he anemic FEC seems to have forgotten its purpose and mission, or perhaps its spine,” Weissman said in a statement.

“The FEC’s new proposed ‘interpretive rule’ simply says that fraudulent misrepresentation law applies no matter what technology is used. That’s a resolution of a question that was never in doubt,” Weissman continued, although he did note the new language “at least leaves the question open” for future petitions.

FEC Chair Sean Cooksey, the lone dissenting vote, had pushed in August for an initial draft that declined new rulemaking and did not include the interpretive rule.

The commission delayed its response to the rulemaking petition twice before Thursday’s meeting, while the four commissioners worked together to craft the interpretation that was ultimately adopted.

Cooksey published a scorching op-ed in The Wall Street Journal in August, arguing that the commission has “neither the expertise nor the legal authority” to regulate AI, and neither do other agencies, including the Federal Communications Commission (FCC), which he has criticized for pursuing a proposal that would compel political advertisers to disclose the use of AI in broadcast television and radio ads.

Cooksey also expressed concern Thursday that the new guidance would cause confusion less than seven weeks out from the November election.

“I just worry that it will be misinterpreted, misunderstood, have a potential chilling effect on people who might think it’s prohibiting something new, when in fact it’s not,” Cooksey said.

“We are not changing any rules, changing any regulations, changing any substance of the law. There’s nothing that is going to be made illegal by this interpretive rule tomorrow that isn’t already illegal,” he said.


