Voyager – An interactive video generation model with realtime 3D reconstruction

Ragnarork

The license used for this is quite a read.

  Available to the world except the European Union, the UK, and South Korea
Not sure what led to that choice. I'd have expected either the U.S. & Canada to be in there, or not these.

  3. DISTRIBUTION.
  [...]
  c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan”; [...]
What's that doing in the license? What are the implications of a license-listed "encouragement"?

NitpickLawyer

> Not sure what led to that choice.

It's the EU AI act. I tried their cute little app a week ago, designed to let you know if you comply, what you need to report, and so on. I got a basically yes, but likely no, still have to register to bla-bla and announce yak-yak and do the dooby-doo, after selecting SME - open source - research - no client facing anything.

It was a mess when they proposed it, it was said to be getting better while they were working on it, and it turns out to be just as unclear and bureaucratic now that it's out.

flanked-evergl

If I were Russia and/or China and I wanted to eliminate the EU as a potential rival economically and militarily, I don't think I could have come up with a better way to do it than EU regulations. If it were not for the largesse of the US, the EU would become a vassal of Russia and/or China. And I think the US is running out of goodwill very rapidly. The EU could, of course, shape up, but it won't.

cedilla

It's hard not to react sarcastically to this. But I will try:

There's nothing special about EU regulations vis-a-vis other laws. China, Russia and the US also have laws, many of which are also perceived as overly bureaucratic.

OtherShrezzing

I think you have an over-aggrandised opinion of Russia's geopolitical, military, and economic power.

Cthulhu_

I'd rather be free and my data safe than be an economic world leader. False dichotomy, I know, but I don't mind the people before money mindset.

viccis

>then EU would become a vassal of Russia

Russia is currently struggling to make inroads on invading its relatively small neighbor, so I really doubt it would be able to make a bunch of nuclear powers who have a nuclear alliance its "vassal"

I understand that Russia's not fighting just Ukraine but rather Ukraine with massive US and EU assistance but my point still stands.

bee_rider

We’ll see if these LLMs end up having a real use, once the “giving away investor money” business model dries up. They really might! But it seems early to say that the EU has missed out on anything, before we see what the thing is.

In general, it is hard to compare the US and the EU; we got a head start while the rest of the world was rebuilding itself from WW2. That started up some feedback loops. We can mess up and siphon too much off a loop, destroying it, and still be ahead. They can be setting up loops without benefitting from them yet.

jimbokun

They seem to be improving a lot on their defense spending, at least.

Will take them a while to get out from under the US umbrella. But acknowledging the problem is the first step.

lawlessone

>If I was Russia and/or China and I wanted to eliminate EU as a potential rival economically and militarily, then I don't think I could have come up with a better way to do it than EU regulations.

Personally I'm not too worried anyone is going to become a global superpower from generative AI slop.

suddenlybananas

The EU is a vassal of the US, that is its entire raison d'être.

llbbdd

I'm amazed that they could pull themselves together enough to publish an app at all.

kookamamie

At the same time, Time selected Henna Virkkunen for their TIME100 AI list: https://time.com/collections/time100-ai-2025/7305860/henna-v... - they are one of the architects of this AI Act nonsense.

whimsicalism

i don’t think it is incorrect to select an architect of this regulation as one of the most influential people on AI

flanked-evergl

The EU is fully invested in virtue signalling over actual tangible results. People keep saying how much stronger the EU's economy is than Russia's, and how Russia is basically a gas station with nukes, but the thing is, even with the EU's "strong" economy Russia has them by the balls. They have to go hat in hand begging the US to step in because they can't do anything themselves, and the US is not going to keep propping up the EU long term, especially not with how hostile the Europeans are towards Americans.

I live in Europe, and I don't want Europe to become a vassal of China/Russia - but if something does not change drastically, it will. Russia is Europe's Carthage; Russia must fall. There is no future that contains both a Russia as it is today and a Europe as it is today, not because of Europe, but because of Russia. If Europe does not eliminate Russia, Russia will eliminate Europe. I have no doubts about this.

But as things stand, there seems to be no way in which we can practically counter Russia at all. If Europe had determination, it would have sent troops into Ukraine and created a no-fly zone — it should do that, but here we are.

L_226

Which app is that?

NitpickLawyer

Check here - https://artificialintelligenceact.eu/assessment/eu-ai-act-co...

Start on the right, and click through the options. At the end you'll get a sort of assessment of what you need to do.

crimsoneer

I mean, neither the UK nor South Korea is in the EU, nor do they have equivalent laws. I suspect it's ongoing pushback from the US and China, the idea that nobody who isn't them has the right to be involved in AI regulation, and just general vibes.

jonas21

South Korea has a number of unusual regulations, including extremely strict restrictions on spatial data [1] and an AI law that, among other things, requires foreign companies to have a representative physically in South Korea to answer to the government [2]. So it's not too surprising to see it on the list.

[1] https://en.wikipedia.org/wiki/Restrictions_on_geographic_dat...

[2] https://cset.georgetown.edu/publication/south-korea-ai-law-2...

NitpickLawyer

> nor does it have equivalent laws

The UK has its chat thing where, if you provide chat (even with bots!), you basically have to be a megacorp to afford the guardrails they think "the kids" need. It's not clear if open source models fall under that, but who's gonna read 300+ pages of insanity to make sure?

mushufasa

The EU and the others listed are actively trying to regulate AI. Permissive OSS licenses' "one job" is to disclaim liability. It's interesting that they are just prohibiting usage altogether in jurisdictions where the definition of liability is uncertain and worrying to the authors.

amelius

That would be an extremely lazy way of writing a license.

jandrewrogers

Unlikely to be laziness, since they went to the effort of writing a custom license in the first place.

A more plausible explanation is the requirements and obligations of those markets are ambiguous or open-ended in such a way that they cannot be meaningfully limited by a license, per the lawyers they retain to create things like licenses. Lawyers don’t like vague and uncertain risk, so they advised the company to reduce their risk exposure by opting out of those markets.

nickpsecurity

It's a careful way of running a business with potential users in highly-regulated markets. They don't know their regulations or laws. They don't want to invest labor in complying with them.

So, they reduced their liability by prohibiting usage of the model, showing those jurisdictions' decision makers that they were complying. I considered doing the same thing for the EU. Although, I also considered that one might partner with an EU company if they are willing to make the models legal in their jurisdiction. Just as a gift to Europeans mainly, but maybe also with a profit-sharing agreement.

b3lvedere

"You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent HunyuanWorld-Voyager Works; and (ii) mark the products or services developed by using the Tencent HunyuanWorld-Voyager Works to indicate that the product/service is “Powered by Tencent Hunyuan” "

Is this the new 'please like and subscribe/feed us your info' method?

wkat4242

I wonder if you can still download and use it here in the EU... I don't care about licensing legalese, but I guess you have to sign up somewhere to get the goods?

notpushkin

wkat4242

Thanks!! I saw there wasn't all that much on github so I missed that part.

whimsicalism

The EU has very difficult AI and data regulations; not sure about South Korea.

NullCascade

Maybe private Chinese AI labs consider EU/UK regulators a bigger threat than US anti-China hawks.

BryanLegend

So in this case is North Korea more free than South Korea?

stargrazer

It explicitly says using a single picture. Wouldn't the world become even more expressive if multiple pictures could be added, such as in a photogrammetry scenario?

btbuildem

I had the same question!

I will have to try this, I have a super edge use case: incomplete bathymetric depth map (lidar boat could not access some areas), coincidentally the most interesting areas are not in the data. My second piece of data is from flyover video (areas of interest where water is also clear enough to see the bottom). With enough video I can mostly remove the water-borne artifacts (ripples, reflections etc) and enhance the river bottom imagery enough to attempt photogrammetric reconstruction. The bottleneck here is that it takes multiple angles to do that, and the visibility through water is highly dependent on the angle of sunlight vs angle of camera.

Instead of doing multiple flyovers at different times of day to try and get enough angles for a mesh reconstruction, maybe this can do it relatively well from one angle!

loudmax

This does sound interesting, but is generative AI the right tool for this use case? A generative AI model sounds great for making a video game or even exploring historical photos, where introducing invented artifacts is a feature not a bug. In your case, wouldn't hallucinations be a problem?

btbuildem

I agree with you that it would be "made up" content, but I don't know how else to fill in the missing data. The area not scanned by LiDAR is just upstream from and directly beneath a set of whitewater rapids.

I can guesstimate the shape of the bottom by the behaviour of the flow, and hand-model the missing parts of the mesh. I thought outsourcing that to a generative model would be a nice shortcut -- and who knows, likely it'll synthesize it more true-to-nature than I would.

Miraste

That sounds quite interesting. Why are you trying to reconstruct a river bottom?

btbuildem

The shape of the river bottom causes a few standing waves / rapids to form. I am fascinated by it and want to better understand the hows and whys of it.

llbbdd

I'm also very curious. Searching for missing persons? Buried treasure?

ilaksh

There are other models that do that, such as photogrammetry models.

But someone could possibly extend the work so it was a few photos rather than one or many. The way you ask the question makes it sound like you think it was a trivial detail they just forgot about.

forinti

In 1995 I went to a talk on Image Processing by an Indian professor. I asked him if there were any methods for improving low resolution images, just to make them look better (I think this was in the context of TV transmissions). He said you couldn't make up information.

Well, 30 years later, you can generate a video from a photograph.

IanCal

Also you can get a lot more information from images than you think, and even more from video. Superresolution was the term iirc.

You can’t make up information, but you can use knowledge of the subject to fill things in accurately, and other assumptions to fill things in plausibly.
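
A toy sketch of the multi-frame idea (my own illustration, not from the thread, assuming the sub-pixel shifts between frames of a static scene are already known; real super-resolution pipelines estimate the shifts and use proper interpolation/deconvolution):

  import numpy as np

  def shift_and_add(frames, shifts, scale):
      """Naive multi-frame super-resolution: place each low-res sample
      at its sub-pixel position on a finer grid and average."""
      h, w = frames[0].shape
      acc = np.zeros((h * scale, w * scale))
      cnt = np.zeros_like(acc)
      for img, (dy, dx) in zip(frames, shifts):
          # Low-res pixel (i, j) lands near high-res cell ((i+dy)*scale, (j+dx)*scale).
          ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
          xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
          yy, xx = np.meshgrid(ys, xs, indexing="ij")
          np.add.at(acc, (yy, xx), img)   # accumulate samples (handles collisions)
          np.add.at(cnt, (yy, xx), 1)
      return acc / np.maximum(cnt, 1)     # cells no frame hit stay 0; real methods interpolate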

Terr_

While there's been a lot of technological progress, I think that story confuses different meanings of "could" and "information".

From a photo of someone's face and shoulders, a child can add "information" by extending it to a stick-figure body with crayons. However, it's not information from the original event that was recorded.

Then there's the difference between strictly capable versus permissible or wise. A researcher "can't" make up data, a journalist "can't" invent quotes, a US President "can't" declare himself dictator, etc.

iamsaitam

Interesting that they chose the color red in the comparison table to mark the best score of each entry.

FartyMcFarter

Just like the stock market in China. Red means the price is going up, green means it's going down.

jsheard

That's also why the stonks-going-up emoji traditionally has a red line; Japan shares that convention.

https://blog.emojipedia.org/why-does-the-chart-increasing-em...

dlisboa

By the way, people might think this has to do with communism but it’s cultural and way before the 20th century. Red is associated with happiness and celebration.

MengerSponge

Almost like the communists chose what iconography to use!

jjcm

As already mentioned, red is a positive color in East Asia. What's actually more surprising to me is that yellow is the 3rd color after green.

It's interesting to me that this breaks convention with the visual spectrum, i.e.:

red ~700nm

green ~550nm

yellow ~580nm

Weird that they aren't in order.

Cthulhu_

Cultural differences, as others have pointed out; I find it fascinating. And also it doesn't impact my day at all.

idiotsecant

It would be a very uninteresting choice in China. Color is partially a cultural construction. Red doesn't mean the same thing there that it does in the West.

geeunits

You'll notice it in every piece of Western propaganda too, from movies to fashion. Red is the China call.

bilsbie

I’m waiting like crazy for one of these to show up on vr.

kridsdale1

Check out visionOS 26’s Immersive Photo mode. Any photo in your iCloud library gets converted by an on-device model to (I assume) a Gaussian splat 3D scene that you can pan and dolly around in. It’s the killer feature that justifies the whole cost of the Vision Pro. The better the source data, the better it works.

I can literally walk into scenes I shot on my Nikon D70 in 2007 and they, and the people, look real.

bee_rider

That is neat.

Although, I can think of some old family photos where half the people in them are dead by now (nothing catastrophic, just time). I wonder how it would feel to walk around in that sort of photo.

jsheard

Please don't hold your breath, they're still pretty far from high-res 120fps with consistent stereo and milliseconds of latency.

geokon

Isn't it picture to 3D model? You'd generate the environment/model ahead of time and then "dive in" to the photo

jsheard

I suppose that's an option yeah, but when people envision turning this kind of thing into a VR holodeck I think they're expecting unbounded exploration and interactivity, which precludes pre-baking everything. Flattening the scene into a diorama kind of defeats the point.

jimmySixDOF

While discussing Google Genie v3 and AndroidXR, Bilawal Sidhu said: "to create an even faster, lower latency pipeline to go from like 24 fps to like 100 fps. I could see that being more of an engineering problem than a research one at this point."

https://youtu.be/VslvofY16I0&t=886

x187463

Based on just about every Two Minute Papers video, engineering and research attack the latency from both sides. The hardware grants steady improvements, and an occasional paper is published with a new/improved approach that decimates the compute required.

dannersy

That would be the most motion-sickness-inducing thing you could possibly do in its current state. The FOV on these videos is super wonky.

chamomeal

So could it actually turn around, like a full 360, and the image would stay the same? It looks super cool but the videos I saw just pan a little one way or the other

tzumaoli

It could in theory. The model generates a depth image per frame, so each pixel becomes a small 3D point. It also assumes that the 3D scene is static. From this, you can simply register all the frames into a huge 3D point cloud by unprojecting the pixels to 3D, render it any way you like (using a classical 3D renderer), and it will be consistent.
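
A minimal sketch of that unprojection step, assuming a simple pinhole camera with known intrinsics K and a camera-to-world pose per frame (the names and the pinhole model are my assumptions, not necessarily Voyager's exact pipeline):

  import numpy as np

  def unproject_depth(depth, K, cam_to_world):
      """Lift a depth map (H, W) into world-space 3D points (H*W, 3)."""
      h, w = depth.shape
      u, v = np.meshgrid(np.arange(w), np.arange(h))
      pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)  # homogeneous pixel coords
      rays = pix @ np.linalg.inv(K).T                   # camera-space rays at depth 1
      pts_cam = rays * depth.reshape(-1, 1)             # scale each ray by its depth
      pts_h = np.hstack([pts_cam, np.ones((pts_cam.shape[0], 1))])
      return (pts_h @ cam_to_world.T)[:, :3]            # into world coordinates

  # Registering the generated frames is then just concatenating their points:
  # cloud = np.vstack([unproject_depth(d, K, pose) for d, pose in frames])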

Though, a problem is that if the generated video itself has inconsistent information, e.g., the object changes color between frames, then your point cloud would just be "consistently wrong". In practice this will lead to some blurry artifacts because you blend different inconsistent colors together. So when you turn around you will still see the same thing, but that thing is uglier and blurrier because it blends between inconsistent coloring.

It will also be difficult to put a virtual object into the generated scene, because you don't have the lighting information and the virtual object can't blend its color with the environment well.

Overall a cool idea, but obviously there are more interesting problems to be solved!

maelito

What I'm interested in is taking Panoramax pictures (a free StreetView alternative) and recreating navigable 3D scenes from them.

londons_explore

> The minimum GPU memory required is 60GB for 540p.

We're about to see next gen games requiring these as minimum system requirements...

ambitiousslab

This is not open source. It is weights-available.

Also, there is no training data, which would be the "preferred form" of modification.

From their license: [1]

  If, on the Tencent HunyuanWorld-Voyager version release date, the monthly active users of all products or services made available by or for Licensee is greater than 1 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.

  You must not use the Tencent HunyuanWorld-Voyager Works or any Output or results of the Tencent HunyuanWorld-Voyager Works to improve any other AI model (other than Tencent HunyuanWorld-Voyager or Model Derivatives thereof).
As well as an acceptable use policy:

  Tencent endeavors to promote safe and fair use of its tools and features, including Tencent HunyuanWorld-Voyager. You agree not to use Tencent HunyuanWorld-Voyager or Model Derivatives:
  1. Outside the Territory;
  2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
  3. To harm Yourself or others;
  4. To repurpose or distribute output from Tencent HunyuanWorld-Voyager or any Model Derivatives to harm Yourself or others; 
  5. To override or circumvent the safety guardrails and safeguards We have put in place;
  6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
  7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
  8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
  9. To intentionally defame, disparage or otherwise harass others;
  10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
  11. To generate or disseminate personal identifiable information with the purpose of harming others;
  12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
  13. To impersonate another individual without consent, authorization, or legal right;
  14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
  15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
  16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
  17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
  18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
  19. For military purposes;
  20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
[1] https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob...

vintermann

The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.

heod749

>The exclusion of EU, UK and South Korea suggests to me they've trained on data those countries would be mad they trained on/would demand money for training on.

Or, those countries are trying to regulate AI.

Hard to feel bad for EU/UK. They tried their best to remain relevant, but lost in the end (talent, economy, civil rights).

wkat4242

Why do you think regulation is bad?

We didn't regulate adtech and now we're stuck with pervasive tracking that's hurting society and consumer privacy. Better to be more cautious with AI too so we can prevent negative societal effects rather than trying to roll them back when billions of euros are already at play, and thus the corporate lobby and interests in keeping things as they are.

We didn't regulate social media algorithms which started optimising for hate (as it's the best means of "engagement") and it led to polarisation in society, the worst effects of which can be seen in the US itself. The country is tearing itself apart. And we see the effects in Europe too. Again, something we should have nipped in the bud.

And the problem isn't mainly the tech. It's the perverse business models behind it, which don't care about societal disruption. That's pretty hard to predict, hence the caution.

thrance

Peak American thinking: megacorps and dictatorships stealing data with no respect whatsoever for privacy and not giving anything back is good. Any attempt to defend oneself from that is foolish and should be mocked. I wish you people could realize you're getting fucked over as much as the rest of us.

NitpickLawyer

> This is not open source. It is weights-available.

> Also, there is no training data, which would be the "preferred form" of modification.

This is not open source because the license is not open source. The second line is not correct, though. The "preferred form" of modification is the weights, not the data. Data is how you modify those weights.

stefan_

That's a very novel (and obviously wrong) interpretation of preferred form. The full sentence is "preferred form of modification" and obviously weights don't allow that.

tbrownaw

> Also, there is no training data, which would be the "preferred form" of modification.

Isn't fine-tuning a heck of a lot cheaper?

Nevermark

Fine tuning with original data plus fine tuning data has more predictable results.

Just training on new data moves a model away from its previous behavior, to an unpredictable degree.

You can’t even reliably test for the change without the original data.

htrp

Outside of AI2, I'm not sure anyone actually, truly open-sources AI models (training logs, data, etc.).

I think at this point, open source is practically shorthand for weights-available.

imiric

> 7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;

> 8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;

"Do as I say, not as I do."

> 15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;

This, and other clauses, effectively prohibit the use of this system within any jurisdiction.

What a ridiculous policy.

NullCascade

What is currently the best model (or multi-model process) to go from text-to-3D-asset?

Ideally based on FOSS models.

neutronicus

Piggybacking ... what about text-to-sprite-sheet? Or even text-and-single-source-image-to-sprite-sheet?

nzach

I've never done this task specifically, but I imagine the new Google model (Gemini 2.5 Flash Image) is what you want. It has really good character consistency, so you should be able to paste a single sprite and ask it to generate the rest.
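
A rough sketch of that workflow, assuming the google-genai Python SDK; the model name, the response handling, and the filenames here are my assumptions and may need adjusting:

  from google import genai
  from PIL import Image

  client = genai.Client()  # reads the API key from the environment

  sprite = Image.open("hero_idle.png")  # the single reference sprite
  prompt = ("Using this sprite as the reference character, generate a 4x4 sprite sheet "
            "of the same character in a walking cycle: same pixel-art style, palette, "
            "and proportions, on a transparent background.")

  response = client.models.generate_content(
      model="gemini-2.5-flash-image-preview",
      contents=[sprite, prompt],
  )

  # Image parts come back as inline data; save the first one.
  for part in response.candidates[0].content.parts:
      if part.inline_data is not None:
          with open("walk_cycle_sheet.png", "wb") as f:
              f.write(part.inline_data.data)
          break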

SXX

This is possible, but mostly for generating assets in roughly the same style you already have. The problem is that AI models, including 2.5 Flash Image, are not good at tracking the state of multiple entities in one image.

If you actually want something consistent, you should really generate images one by one and provide an extensive description of what you expect to see in each frame.

And if you want to make something like animation, it's only really possible if you basically generate thousands of "garbage" images and then edit together what fits.

geokon

Seems like the kind of thing StreetView data would have been perfect to train on.

I wonder if you could loop back the last frame of each video to extend the generated world further, creating a kind of AI fever dream.

kridsdale1

Why the past tense? Google is holding on to all of that, going back years.

Cthulhu_

Yeah, they have all the raw data (Google is a self-confessed data hoarder, after all); I'm sure they have research projects where they use AI and similar techniques to stitch Street View images together.

I also wouldn't be surprised if their Street View cars / people record video instead of stills these days. Assuming they started capturing stuff in 2007 (and it was probably a lot earlier), storage technology has improved at least tenfold since then (probably more), and video processing too.

forrestthewoods

Spin the camera 1080 degrees in place you cowards!!

These clips are very short and don’t rotate the camera more than like 45 degrees. Genie 3 also cheats and only rotates the camera 90 degrees.

It’s always important to pay attention to what models don’t do. And in this case it’s turn the bloody camera around.

I refuse to accept any model as a “world model” if it can’t pass a simple “spin in place” test.

Bah humbug.