Jennifer Gradecki and Derek Curry are associate professors in the Art + Design department at Northeastern.
The two are partners in life, research and art. Together, they study and make artworks about secretive, specialized socio-technical systems.
Some of their recent work has focused on dataveillance technologies, artificial intelligence, and social media disinformation.
One of their recent projects, “Generative Persuasion,” invites viewers to generate tailored propaganda and conspiracy theories on a military-style portable computing center using advanced artificial intelligence and personality-based microtargeting. The goal of the project is to raise awareness of how easily and effectively LLMs can generate inauthentic content designed to influence politics or polarize the public.
For this edition of the Scholar Spotlight series, CTM co-op Melania Diaz interviewed Gradecki and Curry about their research surrounding online misinformation, disinformation and dataveillance.
This interview has been edited for brevity and clarity.

Center for Transformative Media: Thank you both for chatting with me today. So Jennifer, you were one of CAMD’s first AI faculty fellows last year. What was that like for you?
Jennifer Gradecki: As an AI Fellow in Spring 2024, I focused on building college-wide resources, gathering information about past and present AI initiatives in CAMD, and creating an AI literacy workshop about disinformation.
My main project was an AI Showcase website that celebrates the work of CAMD students involving the use of AI in practice, theory or a combination of both. The site is meant to give students a sense of what is possible in CAMD courses and help faculty consider how they might want to integrate AI into their courses.
CTM: Could you tell us about the research projects you’re currently working on?
Gradecki: One of the art projects we’re working on right now is called Generative Persuasion, an interactive installation and a series of workshops that we’ve been touring around. We’re also working on publications about it, which is typically how we work in this kind of multifaceted way. The project uses a large language model (LLM) to create disinformation that is very convincing.

It works very quickly to show people how effective or convincing it actually is, creating a space that allows the participant to think critically. We’re living in an era of technological exuberance, without much critical thinking about the potential impacts within industry. We want to bring that kind of critical thinking space to the public.
Derek Curry: We did exhibit the project at the Center last spring, but since then we’ve developed it to include visual context, and it’s actually creating deepfake videos now.
Gradecki: None of these deepfakes we’ve generated are published online, for obvious reasons. We don’t want to open this system up on the internet, because we don’t want to encourage people to make disinformation. The disinformation is always only happening locally in public places.
Curry: And we’re also fine-tuning LLMs using reinforcement learning and retrieval augmented generation, which will help give it more character. When you’re producing misinformation or disinformation, it can more closely resemble the communities that you’re targeting, like specific nations or specific cultures within a country. And what we’re trying to do is to see just how targeted these LLMs can be made when they’re weaponized for the spread of disinformation.
Gradecki: Yeah, the project is based on research into how LLMs are already being used, so it’s not speculative. We are creating a corporate fiction, in the sense that we have a spoof start-up called Psybernetica, which is referencing Cambridge Analytica. But it’s based on things that we already know people are doing.
We started this project in 2024, and we thought to ourselves, ‘how would somebody use an LLM to create disinformation?’ At that time, OpenAI came out with a report saying that a variety of different countries, state actors specifically, were using their LLMs to create disinformation. We ended up creating an app using a locally-run LLM that was not difficult to get off the rails, simply using system prompts. It’s a little bit more difficult when you interface with something like ChatGPT, where you have to use a variety of different prompt-hacking techniques. But when you’re running an LLM locally, it’s easy to get it to make whatever you want, and so microtargeting was easy to create.
Since we debuted the project and have been exhibiting it more, research has come out from a variety of different places showing that this is exactly what both state and non-state actors are doing to try to influence people politically. It’s definitely a concern of ours, and we want to raise awareness about this for people.
Curry: We created this program to generate disinformation and about two months later RAND came out with a report detailing the steps that could be taken, and they were exactly what we had already done.
Gradecki: We’re using LM Studio, and the researchers at RAND said, ‘you could make a system like this for under $100,000.’ We thought they were really overestimating the cost. It’s unfortunately much more accessible. You can run this on a Mac Mini, and that has enough processing power. I think it’s even more accessible than has been reported.
CTM: What made you choose this topic for your research projects?
Gradecki: I would take it back to maybe 2015, when we made our first project engaging with dataveillance, using A.I. to create predictive analytics.
We found out that we were being targeted and ending up on a watchlist, because we were part of Occupy Los Angeles. We thought it was quite strange that we ended up being on this watchlist and wanted to understand how this happened. This was the first moment that we started to create a surveillance system that the public could access, and we actually had to create our own classifiers and our own data sets.
We didn’t have access to that, and I think it’s difficult for the public to have oversight over what law enforcement and intelligence agencies are doing when we don’t have these data sets. We have a lot of people who don’t have a technological understanding of what’s happening. We basically chose to do it ourselves and find a way to show people how these systems work.

In this instance, it was social media surveillance. Back in 2015, the project was called the Crowdsourced Intelligence Agency, and one of the most impactful things with that project was when people would see their own social media posts decontextualized and then recontextualized in our surveillance system. They might have just been reposting something, or they might have been commenting on something. But it really helped people to understand the way that these technologies can shape how people perceive you.
Curry: For that project, we started with the leaked documents from Snowden, which had a lot of the technical specifications, and so we were able to use a lot of the open-source code to reconstruct something. That was really the start of us trying to create and reverse-engineer systems to let people experience them, because most people will never get to experience a surveillance system.
Then, we started to move into misinformation and disinformation. With online misinformation surrounding COVID, we did a couple of projects, and then we got a Media Futures transnational residency grant to create another project called the Epic Sock Puppet Theater, which uses a data set of disinformation compiled from a bunch of different data sets. The largest one came out of the Mueller investigation: posts from Russian IRA (Internet Research Agency) sock puppets.
CTM: Why did you two decide to work together on research?
Gradecki: We’re media artists, and in this field, there tend to be collectives that work together, especially outside of the art market and art gallery system. But for us, it’s a way to create things that we wouldn’t be able to do with just one of us.
You know, these are very complex systems. They involve different knowledge sets. We’re bringing together a lot of different fields and disciplines, and just the sheer amount of time that it takes to try to reverse engineer one of these technologies is a challenge. Maybe sometimes we have to re-create an entire data set just to reverse engineer a system. We’re doing what teams of people will do in the industry, but instead with just two people. It’s a small collective, compared to law enforcement or what people are doing at Palantir.
Curry: Yeah, I think Celia Pearce put it succinctly when she was describing our practice. She said that we’re basically working as a small software development company. But because our work is emulating existing technologies as a form of critique, we don’t usually make it far in the granting process. The grants that we do get usually come from Europe.

Gradecki: And our work is usually focusing on art or critical and creative approaches to different social topics or issues, as opposed to promoting technological development. I mean, we’re really critiquing technological solutionism. You can imagine that it’s kind of an uncommon thing right now, given the exuberance around A.I.
CTM: Are there any other challenges you guys have faced while working on your projects?
Curry: An interesting problem that comes up quite a bit is people being resistant to what our projects are illustrating in one way or another.
One example is The Epic Sock Puppet Theater project. It features social media posts that were made by Russian sock puppet agents, people working for the IRA who were trying to infiltrate groups on the right and the left.
With this project in particular, a lot of people are very open to the critique of the political persuasion that they don’t identify with, but when they see that people like them have also fallen for the same tactics, or that they themselves might have been pulled in by some of the posts the sock puppets made, they sometimes shut down. That’s the thing we’re actually trying to push through.
Interestingly, this project is being shown in Australia right now and just got censored: the museum informed us that we couldn’t show any of these posts, because Australia has recently passed stricter laws around social media. We had to rework the project entirely just to exhibit it there. And I think that’s the problem I find the most interesting. Then there are also just technical problems and logistical things we have to contend with.
Gradecki: The topics are challenging, too, and a lot of times if we’re applying for a grant and we’re explicitly critical about the development of a technology in a certain area, we know that that’s going to be a hard sell. So we often end up self-funding, which means our projects are much smaller in scope. But we feel it’s important to have this kind of critical counter-voice or counter-narrative out there.
Being professors at Northeastern has really given us the biggest platform to continue making the work, even if we don’t have funding for it specifically. We often will show in other countries and that’s a challenge for us, too. Just traveling there is expensive; it’s exhausting. And we don’t really have the same support systems that they do in the European Union.
For example, there’s a ton of funding for media art there. Culture and critical creative practices are really supported more there. That’s a challenge to be able to get our work out there and find ways to keep making the work. It’s been a challenge since the beginning.
CTM: You kind of talked about it throughout the interview, but here at the Center, we like to keep up with what’s going on in the world. Is there any other recent US or world news that you’ve been thinking about that relates to your research?

Curry: We recently wrote a chapter about technocracy and technocrats, and right as we were finishing it, it was revealed that Starbase, Elon Musk’s city in Texas, is a technocratic city where all of the positions in the local government are filled by higher-level employees of the company, so they are all technocrats.
Gradecki: Yeah, that’s a good point. The idea behind this technocratically run city Starbase is that SpaceX can now accelerate their ability to go to Mars, and we want to bring a critical approach to analyzing the kind of power that people are amassing when they have these technological areas of expertise and warn people about that.
Curry: Yeah, and it extends beyond these governmental structures. Like you could say that Facebook is a technocratic structure because it alone has the authority to moderate what can and can’t be done on their platform, who’s allowed to be on it, who’s pushed off—what can be said and how it can be said is shaped by the platform. There was a little bit of discourse around that earlier on, but then it became more obvious once Elon Musk bought Twitter specifically to control the conversations that could be had on the platform.
Gradecki: Yeah. Another thing that’s been coming up in the news a little bit, and that we’re always watching because we’re looking at technocratic power structures, is all of the funding that has been going into Palantir recently. Palantir is a defense contractor in Silicon Valley, and Peter Thiel and Alex Karp, the people behind the corporation, want to create a technological republic. They want Silicon Valley to move back to its roots, taking a lot of Department of Defense funding to support the military, to support things like automated drone warfare, which we’re concerned about.
Curry: Yeah. Alex Karp even wrote a book called The Technological Republic, where he outlines what a system of government based on that would look like. Again, we’re not speculating, we’re just trying to present these ideas in a way that is easier for people to engage with.
Gradecki: Yeah, because I think sometimes when you’re talking about power and control and sociotechnical systems, it can seem abstract. We’re always looking for a way to make it concrete, to make it something for people to connect with.
CTM: What effect do you hope your research will have on society?
Gradecki: We could talk about that on a case-by-case basis. But when looking at surveillance systems, we are asking questions like: How can these systems fail? How can they be abused? How are they being used? We hope to facilitate a deeper understanding of these systems in practice.
Curry: As far as the viewer experience, we’re trying to get people to see themselves within the system, with our system being sort of like a sandbox for these larger systems in the world. Like what is their role in it?
And we are often putting people in a position of power that they don’t have in real life. Ideally, after being given a certain amount of fictional agency over the system, they would reflect on this power structure and the lack of agency they actually have. We ultimately hope people come away with a kind of self-reflection on their position within a larger socio-technical system.

CTM: What are the next steps you’re hoping to take for your research projects?
Gradecki: Well, we’re happy to say that we’re going to be on sabbatical next semester. That’ll be a great moment for us to focus on continuing to develop “Generative Persuasion.” For one, we have a couple of exhibitions lined up for that, so we’re developing it as an installation. Then we’ve been developing a video involving generative A.I. Every aspect of it is made with generative A.I.: the images, the audio, the narrative, and the narrator. It’s called the Cabinet of Cognition. We’re going to debut that soon.
Curry: We just did an interview with a festival in San Diego that might let us actually show the Epic Sock Puppet Theater in the U.S. That project’s been kind of weird because our data sets are largely US-centric, and institutions in the US have not been open to it yet.
I always get this sense that when we’re showing it in Europe and places like that, people are just like, ‘oh, Americans, they’ll believe anything,’ because the majority of the examples in the project are from the US and many of the examples are ridiculous. So, we haven’t had the best audience for it yet, because we really want people to reflect on their own behavior. Although there is a lot of disinformation in and around the war in Ukraine, which people there can identify with.
Gradecki: Another challenge that we often face is getting access to a data set. It used to be easy to get a Twitter data set, but they’ve closed off all researcher access. A lot of the data that was previously out there isn’t anymore, so our research became more difficult in that way.
It is really challenging to stay on top of these new technologies because a lot of times for a new project, we’ll have to learn something completely different than we had previously known, either theoretically or technologically. But for us, it’s important that we’re engaging directly through technology and with it. And I think it is a strength of our work, that we’re able to actually show people how things operate, but it is a challenge.