That creates the problem of explaining where the first piece of art came from.
They looked at nature and crudely copied that. They didn’t start drawing Mickey Mouse on day one.
Utter nonsense. Have you ever looked at the history of art? It’s all a slow incremental crawl based on previous efforts. Nothing comes from nothing.
and a limited FOV.
Not for movies. Modern VR headsets have around 100°, while comfortable movie viewing distances only need 30-60° (plus a couple of degrees for head movement). The resolution is a far bigger problem, with the VisionPro being the first one that can do about 1080p at 50° FOV. Most other headsets are stuck at 720p or below when they emulate a 2D display.
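To put rough numbers on that, the useful metric is pixels per degree: horizontal pixels of the virtual screen divided by the FOV it occupies. A quick back-of-the-envelope sketch (the figures are the approximate ones from above, not measured specs):

```python
# Rough angular resolution of a virtual 2D screen inside a headset.
# Numbers are the ballpark figures quoted above, not official specs.
def pixels_per_degree(horizontal_pixels: int, fov_degrees: float) -> float:
    return horizontal_pixels / fov_degrees

# A 1080p-wide virtual screen spanning 50° of FOV:
print(round(pixels_per_degree(1920, 50)))  # → 38
# A 720p-wide screen at the same 50°:
print(round(pixels_per_degree(1280, 50)))  # → 26
```

For comparison, normal vision resolves around 60 pixels per degree, which is why even the VisionPro still looks softer than a real TV at the same apparent size.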
Also, VR can effortlessly do 3D movies, and Apple is the first to actually offer them out of the box; finding those for other headsets has always been a huge struggle (i.e. piracy or ripping them yourself).
One thing I haven’t yet seen on the VisionPro is whether it has any form of multiplayer. Watching movies together with other people (VRChat, BigScreen) was one of the more interesting things VR can do; so far the VisionPro looks like a single-player device. Outside of video calls, I have seen no indication that it has full avatars, or of how it behaves when multiple people in the same room wear a VisionPro.
The crux is that they went “draw me a cartoon mouse” and Midjourney went “here is Disney’s Mickey Mouse™”. A simple prompt should not be able to generate that specific an image. If you want something specific, you should need to specify it; otherwise the AI failed to generalize or is somehow heavily biased towards existing images.
It’s a bit different with MidjourneyV6: previous AI models would create their own original images based on patterns learned from the data. MidjourneyV6, on the other hand, reproduces the original images to such a degree that they look identical to the originals to the average observer; you have to see them side by side to even spot the differences at all. DALLE3 has that problem as well, but to a much lesser degree.
That means something is going wrong in the training, e.g. some images end up duplicated so often in the training data that the AI memorizes them completely. Normally that should be reduced or avoided by filtering out duplicate images, but that doesn’t seem to be happening, or the images slip through due to small changes (e.g. size or crop differing between websites).
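That kind of near-duplicate filtering is typically done with perceptual hashes rather than exact byte comparison, so resized or re-cropped copies still match. A minimal sketch of an average hash (aHash), assuming images are already decoded to grayscale pixel grids (real pipelines would decode with PIL/OpenCV; all names here are my own):

```python
# Near-duplicate detection via average hash (aHash):
# downscale to an 8x8 grid by block averaging, threshold at the mean,
# then compare hashes by Hamming distance.

def average_hash(pixels, size=8):
    h, w = len(pixels), len(pixels[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            ys = range(by * h // size, (by + 1) * h // size)
            xs = range(bx * w // size, (bx + 1) * w // size)
            vals = [pixels[y][x] for y in ys for x in xs]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return [1 if b >= mean else 0 for b in blocks]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# A synthetic 64x64 gradient "image" and a resampled 48x48 copy of it,
# standing in for the same picture re-hosted at a different size.
img = [[x + y for x in range(64)] for y in range(64)]
resized = [[img[y * 64 // 48][x * 64 // 48] for x in range(48)] for y in range(48)]

# Resized copies land within a few bits of each other; unrelated images don't.
print(hamming(average_hash(img), average_hash(resized)))  # → 0
```

In practice you would drop any training image whose hash is within a small Hamming distance of one already kept, which catches exactly the size/crop variants that slip past exact-duplicate checks.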
Note that this doesn’t just impact exact duplication; it also impacts remixing. E.g. when you tell it to draw the Joker doing some task, you’ll get Joaquin Phoenix’s Arthur Fleck, not some random guy with clown features.
All of this happens with very simple prompts that do not contain all those very specific details.
In the AI’s defense: all the examples I have seen so far are movie stills from press releases. Those naturally end up getting copied all over the place, and claiming copyright violation over material you released specifically to be reused by the press wouldn’t fly either. But either way, Midjourney is still misbehaving here and needs to be fixed.
More broadly speaking, I think it would be a good time to move away from training these AIs almost exclusively on images and start training them on video. Not just to be able to reproduce video, but so that the AI gets a more holistic understanding of how the world works. At the moment all its knowledge is based on deliberately staged photo moments, and there are very large gaps in its understanding.
Completely impractical. Whether something is AI-generated or manipulated with Photoshop or in the darkroom really doesn’t make a difference. AI isn’t special here; photo manipulation is about as old as the photograph itself. It would be much better to spend some effort on signing authentic images, including a whole chain of trust up to the actual camera. Luckily the Content Authenticity Initiative is already working on that.
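The core idea is simple: the camera signs the pixel data at capture time, and anyone can later check that the file wasn’t altered. A toy sketch of that check (not how C2PA actually works: real systems use public-key signatures and certificate chains, while this substitutes an HMAC with a hypothetical device key so it stays dependency-free):

```python
import hashlib
import hmac

# Hypothetical secret burned into the camera at manufacture time.
# Real systems use an asymmetric keypair so verifiers never hold the secret.
CAMERA_KEY = b"secret-key-burned-into-the-camera"

def sign_image(image_bytes: bytes) -> bytes:
    # Sign a digest of the raw image data.
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(CAMERA_KEY, digest, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"raw image data straight from the sensor"
sig = sign_image(original)

print(verify_image(original, sig))              # True: untouched image
print(verify_image(original + b"edit", sig))    # False: any change breaks it
```

The "chain of trust" part then extends this upward: the camera’s key is certified by the manufacturer, whose key is certified by a root authority, so a verifier only needs to trust the root.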
And the real strength of Reddit isn’t the huge subreddits anyway, they are mostly just trash. It’s all the niche communities, most of them haven’t moved away, or if they have, they moved to Discord.
It’s not about the ads on Google.com, but the ads on the SEO sites themselves; those are also served by Google. Ever heard of DoubleClick? That’s also Google. If Google Search gave you good, clean, non-commercial sites without ads, Google would lose money.
It’s much simpler than that. Google is an ad company, not a search company. SEO spam gives them ad clicks just the same as quality content, if not more so. As long as they don’t become bad enough that everybody switches to the competition, they simply don’t have to care.
What I find even more mind-boggling is that despite all that tracking, advertising still misses the mark by a mile. I regularly see the same ad repeated 10 times in a row while also being completely irrelevant to me. Meanwhile I also frequently miss stuff that would be relevant to me and that should be covered by ads (e.g. movie releases: I might catch the first trailer, but completely miss when the movie actually hits cinemas).
For the money and effort spent on ads, you’d think they could do a lot better than they are.
A proper data model would be a start, i.e. public-key based identities instead of just the old name@server. That way you could hop from server to server and still be the same account. It would make the whole thing a hell of a lot more robust, since in case of a server failure you could just continue on another server as if nothing had happened.
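The idea in a nutshell: the account ID is derived from the user’s public key, not from any server name, so the identity survives a move. A minimal sketch of just the ID-derivation part (in practice you’d also sign posts with something like Ed25519; here a raw byte string stands in for a real public key):

```python
import hashlib

def identity_from_key(public_key: bytes) -> str:
    # The identity is a fingerprint of the key itself; servers are just
    # interchangeable hosts for it, unlike name@server identities.
    return "id:" + hashlib.sha256(public_key).hexdigest()[:16]

alice_pubkey = b"alice's public key bytes"  # hypothetical key material

# The same identity, no matter which server currently hosts the account:
on_server_a = {"host": "lemmy.example", "id": identity_from_key(alice_pubkey)}
on_server_b = {"host": "backup.example", "id": identity_from_key(alice_pubkey)}

print(on_server_a["id"] == on_server_b["id"])  # True: identity survives the move
```

Followers would track the key fingerprint rather than the server name, so when a server dies, the user re-announces the same identity from a new host and proves ownership by signing with the matching private key.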
Which is why blind people are so amazing at drawing…
You are recombining patterns you have seen before: “crazy”, “foam”, “monster” all have a certain look that your brain got trained on; you are simply remixing them. An AI can do exactly the same. The fact that there are words for those concepts should be enough to tell you that those ideas are not original.