This week's notable links
This is my regular digest of links and media I found notable over the last week. Did I miss something? Let me know!
Succor borne every minute
[Michael Atleson at the FTC Division of Advertising Practices]
"Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods."
The FTC gets involved in the obviously rife practice of overselling the capabilities of AI services. These are solid guidelines, and hopefully the precursor to more meaningful action when vendors inevitably cross the line.
While these points are all important, for me the most pertinent is the last:
"Don’t violate consumer privacy rights. These avatars and bots can collect or infer a lot of intensely personal information. Indeed, some companies are marketing as a feature the ability of such AI services to know everything about us. It’s imperative that companies are honest and transparent about the collection and use of this information and that they don’t surreptitiously change privacy policies or relevant terms of service."
It's often unclear how much extra data is being gathered behind the scenes when AI features are added. This is where battles will be fought and lines will be drawn, particularly in enterprises and well-regulated industries.
[Link]
United Airlines seat ads: How to opt out of targeted advertising
[Michael Grothaus at FastCompany]
"United Airlines announced that it is bringing personalized advertising to the seatback entertainment screens on its flights. The move is aimed at increasing the airline’s revenue by leveraging the data that it has on its passengers."
Just another reason why friends don't let friends fly United. We should all be reducing our air travel overall anyway, given the climate crisis, and in a world where we all fly less, shouldn't we choose a better experience?
This sounds like the absolute worst:
"United believes its advertising network will be appealing to brands because “there is the potential for 3.5 hours of attention per traveler, based on average flight time.”"
Passengers from California, Colorado, Connecticut, Virginia, and Utah can opt out of having their private information used to show them targeted ads for the duration of what sounds like an agonizing flight. Passengers from other US states are out of luck - at least until their legislatures also pass reasonable privacy legislation.
Other airlines are removing seat-back entertainment screens to save weight and fuel, so on top of the baseline climate impact of the air travel industry, there's a real additional climate implication here. Planes with seat-back entertainment generally burn more fuel; United is making a revenue decision with all kinds of negative impacts, and it shouldn't be rewarded for it.
[Link]
Perplexity AI Is Lying about Their User Agent
Perplexity AI doesn't use its advertised user agent string or IP range to load content from third-party websites:
"So they're using headless browsers to scrape content, ignoring robots.txt, and not sending their user agent string. I can't even block their IP ranges because it appears these headless browsers are not on their IP ranges."
On one level, I understand why this is happening, as everyone who's ever written a scraper (or scraper mitigations) might: the crawler used to gather training data likely does send the correct user agent string, but on-demand retrievals likely don't, to prevent them from being blocked. That's not a good excuse at all, but I bet that's what's going on.
This is another example of the core issue with robots.txt: it's a handshake agreement at best. There are no legal or technical restrictions imposed by it; we all just hope that bots do the right thing. Some of them do, but a lot of them don't.
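To see just how weak the mechanism is, here's a minimal sketch using Python's standard library of how a well-behaved crawler voluntarily checks robots.txt before fetching. The bot name and URLs are illustrative; nothing in the protocol forces a scraper to run this check at all, and a bot that lies about its user agent sails straight past the rule.

```python
# Sketch: how a *well-behaved* crawler voluntarily honors robots.txt.
# Compliance is opt-in -- a scraper can simply skip this check entirely.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
# Parse a hypothetical robots.txt that tries to block one named AI crawler.
rules.parse([
    "User-agent: PerplexityBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

# An honest bot asks permission before fetching...
print(rules.can_fetch("PerplexityBot", "https://example.com/article"))  # False
# ...but the exact same page is allowed under any other user agent string.
print(rules.can_fetch("Mozilla/5.0", "https://example.com/article"))    # True
```

The second call is the whole problem in miniature: the rule only binds a bot that truthfully identifies itself.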
The only real way to restrict these services is through legal rules that create meaningful consequences for these companies. Until then, there will be no sure-fire way to prevent your content from being accessed by an AI agent.
[Link]
Pentagon ran secret anti-vax campaign to incite fear of China vaccines
[Chris Bing and Joel Schechtman at Reuters]
"The U.S. military launched a clandestine program amid the COVID crisis to discredit China’s Sinovac inoculation – payback for Beijing’s efforts to blame Washington for the pandemic. One target: the Filipino public. Health experts say the gambit was indefensible and put innocent lives at risk."
Reading this, it certainly seems indefensible, although unfortunately not out of line with other US foreign policy efforts. Innocent people died because of this US military operation.
It's a reflection of the simple idea, which seems to have governed US foreign policy for almost a century, that foreign lives matter less in the quest for dominance over our perceived rivals.
Even if you do care about America more than anywhere else, this will have hurt at home, too. The internet being what it is, these influence campaigns almost certainly made their way back to the US and affected vaccine uptake on domestic soil.
The whole thing feels like the military equivalent of a feature built by a novice product manager: someone had a goal that they needed to hit, and this was how they decided to get there. But don't get me wrong: I don't think this was an anomaly or someone running amok. This was policy.
[Link]
On being human and "creative"
"What generative AI creates is not any one person's creative expression. Generative AI is only possible because of the work that has been taken from others. It simply would not exist without the millions of data points that the models are based upon. Those data points were taken without permission, consent, compensation or even notification because the logistics of doing so would have made it logistically improbable and financially impossible."
This is a wonderful piece from Heather Bryant that explores the humanity - the effort, the emotion, the lived experience, the community, the unique combination of things - behind real-world art that is created by people, and the theft of those things that generative AI represents.
It's the definition of superficiality, and as Heather says here, living in a world made by people, rooted in experiences and relationships and reflecting actual human thought, is what I hope for. Generative AI is a technical accomplishment, for sure, but it is not a humanist accomplishment. There are no shortcuts to the human experience. And wanting a shortcut to human experience in itself devalues being human.
[Link]
The Encyclopedia Project, or How to Know in the Age of AI
[Janet Vertesi at Public Books]
"Our lives are consumed with the consumption of content, but we no longer know the truth when we see it. And when we don’t know how to weigh different truths, or to coordinate among different real-world experiences to look behind the veil, there is either cacophony or a single victor: a loudest voice that wins."
This is a piece about information, trust, and the effect that AI is already having on knowledge.
When people said that books were more trustworthy than the internet, we scoffed; I scoffed. Books were not infallible; the stamp of a traditional publisher was not a sign that the information was correct or trustworthy. The web allowed more diverse voices to be heard. It allowed more people to share information. It was good.
The flood of automated content means that this is no longer the case. Our search engines can't be trusted; YouTube is certainly full of the worst automated dreck. I propose that we reclaim the phrase "pink slime" to encompass this nonsense: content generated by a computer at scale in order to get attention.
So, yeah, I totally sympathize with the urge to buy a real-world encyclopedia again. Projects like Wikipedia must be preserved at all costs. But we have to consider whether all this will result in the effective end of a web where humans publish and share information. And if that's the case, what's next?
[Link]
Microsoft Refused to Fix Flaw Years Before SolarWinds Hack
"Former [Microsoft] employee says software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others."
This is a damning story about profit over principles: Microsoft failed to close a major security flaw that left the government (alongside other customers) vulnerable because it wanted to win their business. This directly paved the way for the SolarWinds hack.
This doesn't seem to have been hidden or merely subtext at Microsoft:
"Morowczynski told Harris that his approach could also undermine the company’s chances of getting one of the largest government computing contracts in U.S. history, which would be formally announced the next year. Internally, Nadella had made clear that Microsoft needed a piece of this multibillion-dollar deal with the Pentagon if it wanted to have a future in selling cloud services, Harris and other former employees said."
But publicly it said something very different:
"From the moment the hack surfaced, Microsoft insisted it was blameless. Microsoft President Brad Smith assured Congress in 2021 that “there was no vulnerability in any Microsoft product or service that was exploited” in SolarWinds."
It will be interesting to see what the fallout of this disclosure is, and whether Microsoft and other companies might be forced to behave differently in the future. This story represents business as usual, and without external pressure, it's likely that nothing will change.
[Link]
Calm Company Fund is taking a break
"Inhale. Exhale. Find the space between… Calm Company Fund is going on sabbatical and taking a break from investing in new companies and raising new funds. Here’s why."
Calm Company Fund's model seems interesting. It's a revenue-based investor that makes a return based on its portfolio companies' earnings, but it still uses a traditional VC model to derive its operating budget. That means it runs on a very small percentage of the funds committed by Limited Partners, rather than sharing in the success of its portfolio (at least until much later, when the companies begin to earn out).
That would make sense in a world where the funds committed were enormous, but revenue-based investment tends to raise smaller fund sizes. So Calm Company Fund had enough money to pay for basically one person - and although the portfolio was growing, the staff size couldn't scale up to cope.
So what does an alternative look like? I imagine that it might look like taking a larger percentage of incoming revenue as if it were an LP itself. Or maybe this kind of funding simply doesn't work with a hands-on firm, and the models that attract larger institutional investors are inherently more viable (even if that isn't always reflected in their fund returns).
I want something like this to exist, but the truth is that it might live in the realm of boring old business loans; venture capital arguably exists precisely because of the risks involved in these sorts of companies.
[Link]
These Wrongly Arrested Black Men Say a California Bill Would Let Police Misuse Face Recognition
"Now all three men are speaking out against pending California legislation that would make it illegal for police to use face recognition technology as the sole reason for a search or arrest. Instead it would require corroborating indicators."
Even with mitigations, it will lead to wrongful arrests: so-called "corroborating indicators" don't address the fact that the technology is racially biased and unreliable, and may in fact provide justification for using it.
And the stories of this technology being used are intensely bad miscarriages of justice:
“Other than a photo lineup, the detective did no other investigation. So it’s easy to say that it’s the officer’s fault, that he did a poor job or no investigation. But he relied on (face recognition), believing it must be right. That’s the automation bias that has been referenced in these sessions.”
"Believing it must be right" is one of the core social problems widespread AI is introducing. Many people think of computers as coldly logical, deterministic thinkers. Instead, there are always the underlying biases of the people who built the systems and, in the case of AI, of the vast amounts of public data used to train them. False positives are bad in any scenario; in law enforcement, they can destroy or even end lives.
[Link]
Justice Alito Caught on Tape Discussing How Battle for America ‘Can’t Be Compromised’
[Tessa Stuart and Tim Dickinson at Rolling Stone]
"Justice Samuel Alito spoke candidly about the ideological battle between the left and the right — discussing the difficulty of living “peacefully” with ideological opponents in the face of “fundamental” differences that “can’t be compromised.” He endorsed what his interlocutor described as a necessary fight to “return our country to a place of godliness.” And Alito offered a blunt assessment of how America’s polarization will ultimately be resolved: “One side or the other is going to win.”"
If what's at stake in the upcoming election wasn't previously clear, this makes it so. This is a Supreme Court justice, talking openly, on tape, about undermining the rights of people in favor of a Biblical worldview.
It's easy to see this sort of rhetoric as the dying gasps of the 20th century trying to claw back regressive values that we've mostly moved away from. But to do so is to discount it; we have to take this seriously.
It's a little bit heartening to hear that Justice Roberts - also a big-C Conservative - felt differently and held a commitment to the Constitution and the working of the Court. But in light of a far-right majority composed of Alito, Clarence Thomas, Neil Gorsuch, Brett Kavanaugh, and Amy Coney Barrett, it's not heartening enough.
[Link]
ORG publishes digital rights priorities for next government
"Open Rights Group has published its six priorities for digital rights that the next UK government should focus on."
These are things every government should provide. I'm particularly interested in point number 3:
"Predictive policing systems that use artificial intelligence (AI) to ‘predict’ criminal behaviour undermine our right to be presumed innocent and exacerbate discrimination and inequality in our criminal justice system. The next government should ban dangerous uses of AI in policing."
It's such a science fiction idea, so obviously flawed that Philip K. Dick wrote a story and there's a famous movie about how bad it is, and yet police forces around the world are trying it.
I'd hope for more than an Open Rights Group recommendation: it should be banned, everywhere, as an obvious human rights violation.
The other things on the list are table stakes. Without those guarantees, real democratic freedom is impossible.
[Link]
Study finds 1/4 of bosses hoped RTO would make staff quit
[Brandon Vigliarolo at The Register]
"The findings suggest the return to office movement has been a poorly-executed failure, but one particular figure stands out - a quarter of executives and a fifth of HR professionals hoped RTO mandates would result in staff leaving."
Unsurprising but also immoral: these respondents believed that subsequent layoffs were undertaken because too few people quit in the wake of return-to-office policies.
This quote from the company that conducted the survey seems obviously true to me:
"The mental and emotional burdens workers face today are real, and the companies who seek employee feedback with the intent to listen and improve are the ones who will win."
It's still amazing to me that so many organizational cultures are incapable of following through with this.
[Link]
Former Politico Owner Launches New Journalism Finishing School To Try And Fix All The ‘Wokeness’
"There’s an ocean of problems with journalism, but the idea that there’s just too damn much woke progressivism is utter delusion. U.S. journalism generally tilts center right on the political spectrum."
This is a story about the founder of Politico creating a "teaching hospital for journalists" that appears to be in opposition to "wokeness". But it's also about much of the state of incumbent journalism, which is still grappling with the wave of much-needed social change that is inspiring movements around the world.
"In the wake of Black Lives Matter and COVID there was some fleeting recommendations to the ivy league establishment media that we could perhaps take a slightly more well-rounded, inclusive approach to journalism. In response, the trust fund lords in charge of these establishment outlets lost their [...] minds, started crying incessantly about young journalists “needing safe spaces,” and decided to double down on all their worst impulses, having learned less than nothing along the way."
Exactly. Asinine efforts like anti-woke journalism schools aren't what we need; we need better intersectional representation inside newsrooms, we need better representation of the real stories that need to be told across the country and across the world, and we need to dismantle institutional systems that have acted as gatekeepers for generations.
All power to the outlets, independent journalists, and foundations that are truly trying to push for something better. The status quo is not - and has not been - worth preserving.
[Link]