30 posts tagged with ai by chavenet.

Long after we are gone, our data will be what remains of us

In this sense, the archival violence inflicted by Artificial Intelligence differs from that of a typical archive because the information stored within an AI system is, for all intents and purposes, a black box. It’s an archive built for a particular purpose, but inherently never meant to be seen—it is the apotheosis of information-as-exchange-value, the final untethering of reality from sense. The opaqueness of this archive returns us to the initial question of capitalism without humans, of an archive without a reader, of form without content. When we are gone, is it this form of control that will remain our record of existence? from An Archive at the End of the World
posted by chavenet on May 19, 2024 - 2 comments

In AI, it’s easy to argue about philosophical questions over-much

So please, remember: there are a very wide variety of ways to care about making sure that advanced AIs don’t kill everyone. Fundamentalist Christians can care about this; deep ecologists can care about this; solipsists can care about this; people who have no interest in philosophy at all can care about this. Indeed, in many respects, these essays aren’t centrally about AI risk in the sense of “let’s make sure that the AIs don’t kill everyone” (i.e., “AInotkilleveryoneism”) – rather, they’re about a set of broader questions about otherness and control that arise in the context of trying to ensure that the future goes well more generally. from Otherness and control in the age of AGI by Joe Carlsmith [more inside]
posted by chavenet on May 9, 2024 - 12 comments

Powered by Techno-Guff

Autonomous car racing is a rapidly advancing field that combines cutting-edge technologies such as artificial intelligence (AI), fast mobility stacks, innovative sensor technologies and edge computing to create high-performance vehicles that can perceive their surroundings, make decisions, and race competitively without human intervention. [more inside]
posted by chavenet on Apr 13, 2024 - 16 comments

"AI-powered relationship coaching for a new generation of lonely adults"

It was clear to Nyborg that apps such as Tinder were failing their users: designed to keep them coming back, rather than to find a partner and never return. In that moment, it wasn’t fear she felt but empathy. Through letters like this one she had learnt a lot about a particular group of Tinder’s users: those who were “incredibly lonely” ... When she quit, several investors reached out to Nyborg, asking if she planned to start another dating app. Instead Nyborg took a different turn. She began researching loneliness. The new app she came up with looked very different from Tinder. from The loneliness cure [Financial Times; ungated]
posted by chavenet on Apr 11, 2024 - 51 comments

If It Ain't Woke, Don't Fix It

As we have seen before with other image models like DALLE-3, the AI is taking your request and then modifying it to create a prompt. Image models have a bias towards too often producing the most common versions of things and lacking diversity (of all kinds) and representation, so systems often try to fix this by randomly appending modifiers to the prompt. The problem is that Gemini’s version does a crazy amount of this and does it in ways and places where doing so is crazy. from The Gemini Incident by Zvi Mowshowitz [Part I, Part II] [more inside]
posted by chavenet on Feb 28, 2024 - 48 comments

Those seams we are seduced into not seeing

Let me offer a couple examples of how the arts challenge AI. First, many have pointed out that storytelling is always needed to make meaning out of data, and that is why humanistic inquiry and AI are necessarily wed. Yet, as N. Katherine Hayles (2021: 1605) writes, interdependent though they may be, database and narrative are “different species, like bird and water buffalo.” One of the reasons, she notes, is the distinguishing example of indeterminacy. Narratives “gesture toward the inexplicable, the unspeakable, the ineffable” and embrace the ambiguity, while “databases find it difficult to tolerate”. from Poetry Will Not Optimize; or, What Is Literature to AI?
posted by chavenet on Feb 25, 2024 - 4 comments

The underlying technocratic philosophy of inevitability

Silicon Valley still attracts many immensely talented people who strive to do good, and who are working to realize the best possible version of a more connected, data-rich global society. Even the most deleterious companies have built some wonderful tools. But these tools, at scale, are also systems of manipulation and control. They promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly. The values that win out tend to be the ones that rob us of agency and keep us addicted to our feeds. from The Rise of Techno-Authoritarianism by Adrienne LaFrance [The Atlantic; ungated]
posted by chavenet on Feb 21, 2024 - 23 comments

The Premonition of a Fraying

"For me, a luddite is someone who looks at technology critically and rejects aspects of it that are meant to disempower, deskill or impoverish them. Technology is not something that’s introduced by some god in heaven who has our best interests at heart. Technological development is shaped by money, it’s shaped by power, and it’s generally targeted towards the interests of those in power as opposed to the interests of those without it. That stereotypical definition of a luddite as some stupid worker who smashes machines because they’re dumb? That was concocted by bosses.” from 'Humanity’s remaining timeline? It looks more like five years than 50’: meet the neo-luddites warning of an AI apocalypse [Grauniad; ungated] [CW: Yudkowsky] [more inside]
posted by chavenet on Feb 20, 2024 - 77 comments

By any other name

What is a rose, visually? A rose comprises its intrinsics, including the distribution of geometry, texture, and material specific to its object category. With knowledge of these intrinsic properties, we may render roses of different sizes and shapes, in different poses, and under different lighting conditions. In this work, we build a generative model that learns to capture such object intrinsics from a single image, such as a photo of a bouquet. Such an image includes multiple instances of an object type. These instances all share the same intrinsics, but appear different due to a combination of variance within these intrinsics and differences in extrinsic factors, such as pose and illumination. Experiments show that our model successfully learns object intrinsics (distribution of geometry, texture, and material) for a wide range of objects, each from a single Internet image. Our method achieves superior results on multiple downstream tasks, including intrinsic image decomposition, shape and image generation, view synthesis, and relighting. from Seeing a Rose in Five Thousand Ways
posted by chavenet on Feb 18, 2024 - 1 comment

A Kind of Kinky Turing Test

This adds another layer: Beyond just being financially dominated, the user is further made lesser by the fact that they’re being dominated by something that isn’t even trying to seem human ... “A lot of kink and submission also has to do with ‘depersonalization.’ I think that being dominated by AI is just a way to feel further separated from one’s human identity,” Witt-Eden explained. “By interacting with an inanimate computer program, one also becomes an inferior object.” from Welcome to the Kinky World of AI Financial Domination [more inside]
posted by chavenet on Nov 14, 2023 - 27 comments

It's totally reasonable to be able to say, ‘Hey, don't use my stuff'

While Presser sees Books3 as a contribution to science, others view his data set in a far less flattering light, and see him as sincere but deeply misguided. For critics, Books3 isn’t a boon to society—instead, it’s emblematic of everything wrong with generative AI, a glaring example of how both the rights and preferences of artists are disregarded and disrespected by the AI industry’s main players, and something that straight-up shouldn’t exist. from The Battle Over Books3 Could Change AI Forever [more inside]
posted by chavenet on Oct 8, 2023 - 84 comments

The Evidence for Better-Than-Human Performance is Starting to Pile Up

Human beings drive close to 100 million miles between fatal crashes, so it will take hundreds of millions of driverless miles for 100 percent certainty on this question. But the evidence for better-than-human performance is starting to pile up, especially for Waymo. It’s important for policymakers to allow this experiment to continue because, at scale, safer-than-human driving technology would save a lot of lives. from Are self-driving cars already safer than human drivers? [more inside]
posted by chavenet on Sep 12, 2023 - 99 comments

Bit Nap

Sleep is a liability for creatures as soft and tasty as humans. If humans have evolved into such a liability, there must be a benefit to balance the risk. There is evidence that sleep in general and dreaming specifically provides a state for association. In our dreams, we can revisit memories and make connections in ways not possible while consumed with the activity of consciousness. This would allow strengthening associations through repetition without having to repeat the physical event ... While computers are ideal for multitasking, they have performance differences in trying to develop associations as events unfold versus processing and pruning after the fact, once removed from the situation (like humans, robots benefit from hindsight even without rear-facing sensors). Despite the apparent benefits, when dividing human and machine tasks for optimization, one would expect resistance to signing robots up for nap time. from Why Do Androids Dream of Electric Sheep? [Modern War Institute] [more inside]
posted by chavenet on Aug 1, 2023 - 19 comments

An Index of the Insanity of Our World

There is so much in Weizenbaum’s thinking that is urgently relevant now. Perhaps his most fundamental heresy was the belief that the computer revolution, which Weizenbaum not only lived through but centrally participated in, was actually a counter-revolution. It strengthened repressive power structures instead of upending them. It constricted rather than enlarged our humanity, prompting people to think of themselves as little more than machines. By ceding so many decisions to computers, he thought, we had created a world that was more unequal and less rational, in which the richness of human reason had been flattened into the senseless routines of code. from ‘A certain danger lurks there’: how the inventor of the first chatbot turned against AI [Grauniad; ungated]
posted by chavenet on Jul 26, 2023 - 28 comments

An Unprecedented Feat of Tedious and Repetitive Labor

Much of the public response to language models like OpenAI’s ChatGPT has focused on all the jobs they appear poised to automate. But behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping. from AI Is a Lot of Work [Intelligencer; ungated]
posted by chavenet on Jun 21, 2023 - 21 comments

AI am a Camera

Paragraphica is a context-to-image camera that uses location data and artificial intelligence to visualize a "photo" of a specific place and moment. The camera exists both as a physical prototype and a virtual camera that you can try. Project by Bjoern Karmann [CW: you will be interacting with AI]
posted by chavenet on Jun 3, 2023 - 9 comments

AI-hab: All My Means Are Sane, My Motive and My Object Mad

A boggy, soggy, squitchy picture truly, enough to drive a nervous man distracted. Yet was there a sort of indefinite, half-attained, unimaginable sublimity about it that fairly froze you to it, till you involuntarily took an oath with yourself to find out what that marvellous painting meant. Ever and anon a bright, but, alas, deceptive idea would dart you through.—It’s the Black Sea in a midnight gale.—It’s the unnatural combat of the four primal elements.—It’s a blasted heath.—It’s a Hyperborean winter scene.—It’s the breaking-up of the icebound stream of Time. But at last all these fancies yielded to that one portentous something in the picture’s midst. That once found out, and all the rest were plain. But stop; does it not bear a faint resemblance to a gigantic fish? even the great leviathan himself? from Chaos Bewitched: Moby-Dick and AI by Eigil zu Tage-Ravn
posted by chavenet on Apr 24, 2023 - 14 comments

The Internet is Not the Tool. I Am the Tool

At all times, I understand that the internet is using data I somehow gave it, and that those processes and technologies are now too complex for me to track. But it feels aggressive to me, in the way it would feel aggressive if suddenly every kind of advertisement everywhere you went in the world was designed only for you. When I say the new situation feels aggressive, I am anthropomorphizing the internet, but in theory the internet is a web of anthros, so that statement might be nonsensical. But is the internet the people? Or is it everything the people see and hear and know and make up, without the people? from You Have a New Memory by Merritt Tierce [Slate; ungated]
posted by chavenet on Apr 22, 2023 - 9 comments

More or Less Stable Chaos

Even tyrants would be foolish to pass down an iron law when a low-key change of norms would lead to the same results. And there is no question that changes of norms in Western countries since the beginning of the pandemic have given rise to a form of life plainly convergent with the Chinese model. Again, it might take more time to get there, and when we arrive, we might find that a subset of people are still enjoying themselves in a way they take to be an expression of freedom. But all this is spin, and what is occurring in both cases, the liberal-democratic and the overtly authoritarian alike, is the same: a transition to digitally and algorithmically calculated social credit, and the demise of most forms of community life outside the lens of the state and its corporate subcontractors. from Permanent Pandemic by Justin E.H. Smith [Harpers; Archive] [more inside]
posted by chavenet on May 31, 2022 - 48 comments

Permutation.City

CW: flashing throughout video An experiment in AI assisted video composition, starring the Storror parkour team. [via The Awesomer]
posted by chavenet on Aug 19, 2021 - 6 comments

William Shatner is Still Alive

It was William Shatner's 90th birthday on March 22. As a gift, he gave himself to the future.
posted by chavenet on Mar 23, 2021 - 34 comments

Flim is the Thing

Flim is a movie search engine currently in beta that returns screenshots from movies based on keywords. [Via Kottke & Boing Boing & Recomendo]
posted by chavenet on Feb 28, 2021 - 16 comments

Pikachu's Basilisk

Matthew Rayfield, a programmer who makes mobile and web-based toys, created 3,000 new Pokémon using open-source AI models. Via Vice
posted by chavenet on Nov 16, 2020 - 11 comments

Algonuts

Certain artists are highly productive and constrain themselves to a particular style and format for their entire careers. Charles Schulz, the creator and artist of the Peanuts comic strip, produced thousands of comics over 50 years. As a result, he is one of the few artists who have enough ‘content’ to train a StyleGAN2 model. By extracting each frame from nearly 18,000 comic strips I was able to harvest 63,800 distinct images featuring Charlie, Snoopy, Peppermint Patty and the rest of the gang – plenty of food for the network to chew on. Several hundred hours of computational time later, a network containing the ‘visual DNA’ of Peanuts emerged.
posted by chavenet on Jun 22, 2020 - 31 comments
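The frame-harvesting step quoted above could be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes a daily strip is a fixed-width row of equal panels, so each panel's crop box can be computed from the strip's dimensions alone. The function name and the 4-panel default are hypothetical.

```python
def panel_boxes(width, height, n_panels=4, gutter=0):
    """Return (left, upper, right, lower) crop boxes, one per panel,
    for a horizontal strip of equally sized panels."""
    panel_w = (width - gutter * (n_panels - 1)) // n_panels
    boxes = []
    for i in range(n_panels):
        left = i * (panel_w + gutter)
        boxes.append((left, 0, left + panel_w, height))
    return boxes

# With a real strip image, each box would be passed to an image
# library's crop routine (e.g. Pillow's Image.crop) to harvest one
# training image per panel.
```

Real strips have gutters and occasional irregular layouts, so a production pipeline would likely detect panel borders rather than assume a fixed grid; this sketch only shows the shape of the bookkeeping involved.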

This Meme Does Not Exist

AI Memes by Imgflip with a stream of Memes generated by the AI meme generator
posted by chavenet on Apr 29, 2020 - 46 comments

The Quality of Mercy Is Not Strnen

Finally, I crossed my Rubicon. The sentence itself was a pedestrian affair. Typing an e-mail to my son, I began “I am p—” and was about to write “pleased” when predictive text suggested “proud of you.” I am proud of you. Wow, I don’t say that enough. And clearly Smart Compose thinks that’s what most fathers in my state say to their sons in e-mails. I hit Tab. No biggie. And yet, sitting there at the keyboard, I could feel the uncanny valley prickling my neck. It wasn’t that Smart Compose had guessed correctly where my thoughts were headed—in fact, it hadn’t. The creepy thing was that the machine was more thoughtful than I was. From The Next Word, a longform look at machine-enabled writing and predictive text by John Seabrook in The New Yorker
posted by chavenet on Oct 8, 2019 - 28 comments

Stop Player, Joke #4

As the perforated rolls of the player piano prefigured the punch cards of early computing, so, too, have they shaped how we talk about creative machines. Like the ghostly hands that played upon pianola keys, AI art stokes deep cultural anxieties about the risks automation poses to human activity. Ultimately, we fear that they will replace us, whether at the factory or at the canvas. From Ghost Hands, Player Pianos, and the Hidden History of AI by Vanessa Chang [LARB] [more inside]
posted by chavenet on Oct 6, 2019 - 5 comments

Please Reserve a Table for Two, at 8 pm Thursday, at the Uncanny Valley

From the human end, Duplex's voice is absolutely stunning over the phone. It sounds real most of the time, nailing most of the prosodic features of human speech during normal talking. The bot "ums" and "uhs" when it has to recall something a human might have to think about for a minute. It gives affirmative "mmhmms" if you tell it to hold on a minute. Everything flows together smoothly, making it sound like something a generation better than the current Google Assistant voice. Talking to Google Duplex: Google’s human-like phone AI feels revolutionary [ArsTechnica] [more inside]
posted by chavenet on Jul 2, 2018 - 36 comments

Slaughterbots

UC Berkeley professor Stuart Russell and the Future of Life Institute have created an eerie viral video titled "Slaughterbots" that depicts a future in which humans develop small, hand-sized drones that are programmed to identify and eliminate designated targets. In the video, the technology is initially developed with the intention of combating crime and terrorism, but the drones are taken over by an unknown force that uses the powerful weapons to murder a group of senators and college students. UC Berkeley professor's eerie lethal drone video goes viral [Warning: graphic violence]
posted by chavenet on Nov 19, 2017 - 64 comments

We Use 'em to Spot Terminators

Fido vs Spot [SLYT]
posted by chavenet on Mar 1, 2016 - 18 comments
