this post was submitted on 25 Feb 2025
241 points (97.3% liked)

movies


James Cameron has reportedly revealed an anti-AI title card will open up Avatar 3, officially titled Avatar: Fire and Ash. The Oscar-winning director shared the news in a Q&A session in New Zealand attended by Twitter user Josh Harding.

Sharing a picture of Cameron at the event, they wrote: "Such an incredible talk. Also, James Cameron revealed that Avatar: Fire and Ash will begin with a title card after the 20th Century and Lightstorm logos that 'no generative A.I. was used in the making of this movie'."

Cameron has been vocal in the past about his feelings on artificial intelligence, speaking to CTV News in 2023 about AI-written scripts. "I just don’t personally believe that a disembodied mind that’s just regurgitating what other embodied minds have said – about the life that they’ve had, about love, about lying, about fear, about mortality – and just put it all together into a word salad and then regurgitate it," he told the publication. "I don’t believe that’s ever going to have something that’s going to move an audience. You have to be human to write that. I don’t know anyone that’s even thinking about having AI write a screenplay."

top 36 comments
[–] [email protected] 30 points 1 day ago (1 children)

James Cameron doesn’t do what James Cameron does for James Cameron. James Cameron does what James Cameron does because James Cameron is James Cameron!

[–] [email protected] 7 points 1 day ago* (last edited 1 day ago)

The bravest pioneer

[–] [email protected] 26 points 1 day ago

"It's ok, only the nice computer simulations helped us make this movie, not the bad ones!"

[–] [email protected] 41 points 1 day ago (1 children)

Unfortunately, the same can't be said for the 4K transfers of Aliens, True Lies, and The Abyss.

But I applaud the efforts nonetheless.

[–] [email protected] 45 points 1 day ago* (last edited 1 day ago) (2 children)

AI upscaling isn't the same thing as generative AI.

One just makes the image a larger resolution by shifting existing pixels around; the other can create entirely new scenes.

[–] [email protected] 8 points 14 hours ago (1 children)

AI or "algorithm" upscaling fundamentally creates something out of nothing. That's what upscaling is, so it is generative, because it's quite literally generating "guesses" at what should be there, pixel by pixel.

It's literally the trope in movies where they're reviewing grainy security cam footage and someone says "enhance" and it's magically a crystal clear image. It's just that we have the technology to do that now.

I'd agree there's a semantics argument that using AI for upscaling is different from creating something new, but it's just that: semantics.
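
To make the "generating guesses" point concrete, here's a rough sketch (Python/NumPy, toy values, nothing a studio actually uses) of plain 2x bilinear upscaling. Three out of every four output pixels never existed in the source; they're blended guesses from the nearest real pixels. Learned upscalers do the same thing, just with much fancier guesses trained on other images.

```python
import numpy as np

def upscale_2x_bilinear(img: np.ndarray) -> np.ndarray:
    """Double the resolution by blending neighbouring source pixels."""
    h, w = img.shape
    out = np.zeros((h * 2, w * 2), dtype=float)
    for y in range(h * 2):
        for x in range(w * 2):
            # Map the output pixel back onto the source grid...
            sy, sx = min(y / 2, h - 1), min(x / 2, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # ...and invent a value by blending the four nearest real pixels.
            out[y, x] = (img[y0, x0] * (1 - fy) * (1 - fx)
                         + img[y0, x1] * (1 - fy) * fx
                         + img[y1, x0] * fy * (1 - fx)
                         + img[y1, x1] * fy * fx)
    return out

tiny = np.array([[0.0, 1.0],
                 [1.0, 0.0]])
print(upscale_2x_bilinear(tiny))  # most of these numbers were never captured
```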

[–] [email protected] 2 points 6 hours ago

I think those semantics are really important during this time when creatives are at war with AI, so the public is aware of what is what; right now everything is lumped in together. There is a very big leap between blurry images on CSI TV shows figuring out license plates and taking something from, let’s say, 1080p and making it 4K. I use upscaling quite often in my line of work and we really do draw a line between something like Topaz and something like Adobe Firefly. Upscalers have also been around since maybe 2015, but they weren’t as advanced or popular to use back then.

[–] [email protected] 40 points 1 day ago* (last edited 1 day ago) (1 children)

I'm not sure if you're familiar with the AI upscales mentioned, but AI did a lot more than just "making it a larger resolution". It fundamentally altered and degraded certain visual aspects of the films.

[–] [email protected] 22 points 1 day ago

They used a shitty upscaler and got shitty results. Color me not surprised.

[–] [email protected] 14 points 1 day ago

Reminds me of seeing a disclaimer in the credits of animated movies saying no Motion Capture was used

[–] [email protected] 5 points 1 day ago (1 children)

2030's gonna be interesting

[–] Willy 1 points 22 hours ago

fingers crossed.

[–] [email protected] 7 points 1 day ago (1 children)

That's great, but don't forget to make it not suck ass. When a movie sucks ass, it's not fun to watch it. Like Avatar 2? That sucked ass. We waited longer than the Titanic was underwater for a sequel that was as warm as the water where Titanic rests. That sucks ass

[–] [email protected] 12 points 1 day ago (2 children)

I'm not sure you understand how time works.

[–] Kecessa 1 points 1 day ago

Avatar 2: Bro edition

[–] [email protected] 1 points 1 day ago

You don't understand facetious comments.

[–] [email protected] 6 points 1 day ago (2 children)

So would his stance change if we moved past basic LLMs and had models that can generate coherent, innovative ideas that were not learned?

[–] [email protected] 33 points 1 day ago (1 children)

‘Would his opinion of the technology be different if the technology was different?’

[–] [email protected] 9 points 1 day ago

Indeed, is his opinion based on the way the current technology works (by regurgitating), or is it based on the loss of creative jobs?

[–] [email protected] 11 points 1 day ago (1 children)

Only when we can accurately point to any one idea that a human has had that hasn't been a product of previous information.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago) (1 children)

With historical research, I think it's possible to say that a given idea appeared at about this point in time and space, even if it was refined by many previous minds. For example, you can tell roughly when an engineering invention or an art style appeared. Of course you will always have specialists debating who the actual pioneer was (often influenced by patriotism), but I guess we can at least reach a consensus on when it started to actually impact society.
Also, maybe we could have an algorithm to determine whether a generated result was part of the learning corpus or not.

[–] [email protected] 5 points 1 day ago (1 children)

But the idea is never original. The wheel likely wasn't invented randomly; it started as a rock that rolled down a hill. Fire likely wasn't started by a caveman with sticks; it was a natural fire that was copied. Expressionism wasn't a new style of art; it was an evolution influenced by previous generations. Nothing is purely original. The genesis of everything is in the existence of something else. When we talk about originality, we mean that these things haven't been put together this exact way before, and thus, it is new.

[–] [email protected] 4 points 1 day ago (2 children)

I don't disagree with your definition, but I'm not sure what it changes about the point that current LLMs lack human creativity. Do you think there isn't anything more to human creativity than probabilistic regurgitation, so LLMs have already overcome human creativity and it's just a matter of recognition?

[–] [email protected] 2 points 1 day ago (1 children)

I agree that humans are just flesh computers, but I don't know whether we can say LLMs have overcome human creativity because I think the definition is open to interpretation.

Is the kind of intentionality that's only possible with metacognition a requirement for something to be art? If no, then we and AI and spiders making webs are all doing the same "creativity" regardless of our abilities to consider ourselves and our actions.

If yes, then is the AI (or the spider) capable of metacognition? I know of no means to answer that except that ChatGPT can be observed engaging in what appears to be metacognition. And that leaves me with the additional question: What is the difference between pretending to think something and actually thinking it?

In terms of specifically "overcoming" creativity, I don't think that kind of value judgement has any real meaning. How do you determine whether artist A or B is more creative? Is it more errors in reproduction leading to more original compositions?

[–] [email protected] 2 points 23 hours ago* (last edited 23 hours ago) (1 children)

As I suggested above, I would say creating a coherent idea or link between ideas that was not learned. I guess it could be possible to create an algorithm to estimate if the link was not already present in the learning corpus of an ML model.

[–] [email protected] 2 points 20 hours ago

I'm not sure how humans go about creating ideas, and therefore cannot be sure that the resulting ideas aren't a combination of learned things. There have been people in history who did things like guess that everything is made up of tiny particles long before we could ever test the idea, but probably they got the idea from observing various forms of matter, right? Like seeing how rocks can crumble into sand and grain can be ground to flour. I don't think they would have been able to come up with the idea in a vacuum. I think anything we're capable of creating must be based on things which we've already learned about, but I don't know that I can prove that.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

Human creativity, at its core, is not original. We smush things together, package the result as something new, and in our hubris call it "original" because we are human, and thus infallible originators. Our minds are just electrical impulses that fire off in response to stimuli. There is no divine spark; that's hogwash. From a truly scientific standpoint, we are machines built with organic matter. Our ones and zeros are the same as the machines we create, we just can't deal with the fact that we aren't as special as we like to think. We derive meaning from our individuality, and to lose that would mean that we aren't individual. However, we are deterministic.

If you woke up this morning and relived the same day that you already have, and had no prior knowledge of what had happened the previous time you experienced it, and no other changes were made to your environment, you would do the same thing that you did the first time, without fail. If you painted, you would paint the same image. If you ate breakfast, you would eat the same breakfast. How do we know this? Because you've already done it. Why does it work this way? Because nothing had changed, and your ones and zeros flipped in the same sequences. There is no "chaos". There is no "random". Nothing is original because everything is the way it is because of everything else. When you look at it from that bird's eye perspective, you see that a human mind making "art" is no different than an LLM, or some form of generative AI. Stimulus is our prompt, and our output is what our machine minds create from that prompt.

Our "black box" may be more obscure and complex than current technology is for AI, but that doesn't make it different any more than a modern sports car is different than a Model T. Both serve the same function.

[–] [email protected] 4 points 23 hours ago* (last edited 23 hours ago) (2 children)

From a truly scientific standpoint, we are machines built with organic matter. Our ones and zeros are the same as the machines we create, we just can’t deal with the fact that we aren’t as special as we like to think. We derive meaning from our individuality, and to lose that would mean that we aren’t individual. However, we are deterministic.

Would you have some scientific sources about the claim that we think in binary and that we are deterministic?

I think you may be conflating your philosophical point of view with science.

[–] [email protected] 2 points 21 hours ago* (last edited 20 hours ago)

All Turing-complete modes of computation are isomorphic, so whether it's binary or not is irrelevant. Both silicon computers and human brains are Turing-complete; both can compute all computable functions (given enough time and scratch paper).

If non-determinism even exists in the real world (it clashes with cause and effect in a rather fundamental manner), then the architecture of brains, nay of life as we know it in general, actively works towards minimising its impact. Like, copying the genome has a quite high error rate at first; then error correction is applied, which brings the error rate down to practically zero; then randomness is introduced in strategic places, influenced by environmental factors. When the finch genome sees that an individual does not get enough food, it throws dice at the beak shape, not at mitochondrial DNA.

It's actually quite obvious in AI models: The reason we can quantise them, essentially rounding every weight of the model to be able to run them with lower-precision maths so they run faster and with less memory, is because the architecture is ludicrously resistant to noise, and rounding every number is equivalent to adding noise, from the perspective of the model.
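
A quick sketch of that rounding-as-noise point (Python/NumPy, made-up weights rather than any real model): symmetric 8-bit quantisation perturbs every weight by at most half a quantisation step, which the network experiences as a small amount of bounded noise.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000)   # stand-in for one layer's weights

# Symmetric 8-bit quantisation: map the floats onto 255 evenly spaced levels.
scale = np.abs(weights).max() / 127
quantised = np.round(weights / scale).astype(np.int8)
dequantised = quantised.astype(float) * scale

# The "damage" done by quantisation is just bounded, noise-like error.
error = dequantised - weights
print("max |error|:", np.abs(error).max())     # never exceeds scale / 2
print("scale / 2  :", scale / 2)
```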

[–] [email protected] 2 points 23 hours ago (1 children)

The deterministic universe is a theory as much as the Big Bang is. We can't prove it, but all of the evidence is there. Thinking in binary is me making a point about how our minds interact with the world: if you break down any interaction to its smallest parts, it becomes a simple yes/no, or on/off; we just process it much faster than we think about it in that sense.

[–] [email protected] 3 points 21 hours ago (1 children)

There are various independent, reproducible measurements that give weight to the hot Big Bang theory as opposed to other cosmological theories. Are there any for the deterministic nature of humans?
Quantum physics is not deterministic, for example. While quantum decoherence explains why macroscopic physical systems are deterministic, can we really say it couldn't play a role in our neurons?
On a slightly different point, quantum bits are not binary; they can represent a continuous superposition of multiple states. Why would our minds be closer to binary computing than to quantum computing?

[–] [email protected] 0 points 21 hours ago* (last edited 21 hours ago) (1 children)

The comparison between human cognition and binary isn't meant to be taken literally as "humans think in 1s and 0s" but rather as an analogy for how deterministic processes work. Even quantum computing, which operates on superposition, ultimately collapses to definite states when observed—the underlying physics differs, but the principle remains: given identical initial conditions, identical outcomes follow.

Regarding empirical evidence for human determinism, we can look to neuroscience. Studies consistently show that neural activity precedes conscious awareness of decisions (Libet's experiments and their modern successors), suggesting our sense of "choosing" comes after the brain has already initiated action. While quantum effects theoretically could influence neural firing, there's no evidence these effects propagate meaningfully to macro-scale cognition—our neural architecture actively dampens random fluctuations through redundancy.

The question isn't whether humans operate on binary code but whether the system as a whole follows deterministic principles. Even if quantum indeterminacy exists at the micro level, emergence creates effectively deterministic systems at the macro level. This is why weather patterns, while chaotic, remain theoretically deterministic—we just lack perfect information about initial conditions.

My position isn't merely philosophical—it's the most parsimonious explanation given current scientific understanding of causality, neuroscience, and complex systems. The alternative requires proposing special exemptions for human cognition that aren't supported by evidence.

[–] [email protected] 1 points 9 hours ago (1 children)

Even quantum computing, which operates on superposition, ultimately collapses to definite states when observed—the underlying physics differs, but the principle remains: given identical initial conditions, identical outcomes follow.

I think this is incorrect: it does collapse to a definite state when observed, but the value of that state is probabilistic. We make it deterministic by producing a large number of measurements and running a statistical test on the distribution of all the measurements to get a final value. Maybe our brain also does a test on the statistics of probabilistic measurements, or maybe it doesn't and depends directly on probabilistic measurements, or a combination of both.
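
A toy simulation of that statistical step (Python, hypothetical probability, not a model of any real quantum system): a single measurement is unpredictable, but the distribution over many shots settles on a stable value you can decide on.

```python
import random

p_one = 0.3        # hypothetical probability of the state collapsing to |1>
random.seed(42)

def measure() -> int:
    """One 'measurement': collapses probabilistically to 0 or 1."""
    return 1 if random.random() < p_one else 0

single_shot = measure()                                        # unpredictable on its own
estimate = sum(measure() for _ in range(100_000)) / 100_000    # settles near 0.3
print(single_shot, estimate)
```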

we just lack perfect information about initial conditions.

We also lack fully proven equations, and complete solutions to the equations we do have, in fluid dynamics.

I think parsimony is very much a matter of personal opinion at this point in our knowledge.

[–] [email protected] 2 points 7 hours ago

You're right about quantum measurement—I oversimplified. Individual quantum measurements yield probabilistic outcomes, not deterministic ones. My argument isn't that quantum systems are deterministic (they're clearly not at the individual measurement level), but rather that these indeterminacies likely don't propagate meaningfully to macro-scale neural processing.

The brain operates primarily at scales where quantum effects tend to decohere rapidly. Neural firing involves millions of ions and molecules, creating redundancies that typically wash out quantum uncertainties through a process similar to environmental decoherence. This is why most neuroscientists believe classical physics adequately describes neural computation, despite the underlying quantum nature of reality.

Regarding fluid dynamics and weather systems, you're correct that our incomplete mathematical models add another layer of uncertainty beyond just initial conditions. Similarly with brain function, we lack complete models of neural dynamics.

I concede that parsimony is somewhat subjective. Different people might find different explanations more "simple" based on their background assumptions. My deterministic view stems from seeing no compelling evidence that neural processes harness quantum randomness in functionally significant ways, unlike systems specifically evolved to do so (like certain photosynthetic proteins or possibly magnetoreception in birds).

The question remains open, and I appreciate the thoughtful pushback. While I lean toward neural determinism based on current evidence, I acknowledge it's not definitively proven.

[–] [email protected] 4 points 1 day ago (1 children)

if it's not hand animated on 1s, I care about it as much as I did the last 2

[–] [email protected] 2 points 8 hours ago* (last edited 8 hours ago)

It is hand animated on 1's. On 0's too