Testing AI controlling AI with Hy3 Preview: I barely lifted a finger the whole time.
One-click deployment of Hermes on WorkBuddy took a few rounds of adjustments, but I finally got it up and running smoothly.
The only minor issue was setting up Supermemory, which was a bit slow on the uptake. I had to go over simple steps several times, guiding it patiently like teaching a kid.
The experience of AI orchestrating AI is absolutely incredible. I started running Agents with Hunyuan right after its release, and it actually works perfectly.
295B total parameters with only 21B active, plus direct access to TokenHub now. Great cost-performance ratio too.
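A rough sketch of why that parameter split matters: in a Mixture-of-Experts model, per-token compute scales with the active parameter count rather than the total. This is a simplified back-of-the-envelope calculation, not an official cost figure:

```python
# Back-of-the-envelope MoE compute estimate.
# Assumption: per-token FLOPs scale roughly with ACTIVE parameters,
# so a 295B-total / 21B-active model runs far cheaper than a dense 295B one.
total_params_b = 295   # total parameters, in billions
active_params_b = 21   # parameters active per token, in billions

active_fraction = active_params_b / total_params_b
print(f"Active fraction: {active_fraction:.1%}")  # ~7.1% of weights used per token
```

In other words, each token touches only about 7% of the weights, which is where the favorable cost-performance ratio comes from.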
Honestly, I used to get stuck on all kinds of environment configuration issues when deploying Agents locally. Having Hy3 take command made the whole process far more streamlined.
Hands-on testing of HY-World 2.0 shows a significant improvement in end-to-end engineering maturity compared to version 1.5.
The model supports direct multimodal input from text, single-frame images, and video. Inference can be launched without camera intrinsic/extrinsic calibration or any additional preprocessing.
After panorama generation, the built-in Spatial Agent automatically performs semantic navigation path planning. Combined with the spatial consistency constraints from HY-WorldStereo, this ensures artifact-free multi-view generation and stable geometric alignment.
Outputs include standard 3D asset formats such as Mesh, 3DGS, and point clouds, which can be imported directly into Unity/UE.
It is suitable for engineering scenarios including game level prototyping, digital twins, and embodied simulation.
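The pipeline described above can be sketched as a single input-to-assets flow. To be clear, every class, function, and file name below is a placeholder of my own for illustration, not the actual HY-World 2.0 API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the HY-World 2.0 flow as described in the post.
# All names here are illustrative placeholders, not the real API.

@dataclass
class SceneAssets:
    mesh_path: str        # standard mesh for Unity/UE import
    gaussians_path: str   # 3DGS splat file
    pointcloud_path: str  # point cloud export

def generate_scene(prompt=None, image=None, video=None):
    """One modality in, engine-ready 3D assets out.

    Per the post, no camera intrinsics/extrinsics or extra preprocessing
    are required; this stub just mirrors the described pipeline stages.
    """
    if all(x is None for x in (prompt, image, video)):
        raise ValueError("needs text, a single-frame image, or video")
    # Stage 1: panorama generation from the input modality
    # Stage 2: Spatial Agent plans a semantic navigation path
    # Stage 3: HY-WorldStereo enforces cross-view spatial consistency
    # Stage 4: export standard 3D asset formats
    return SceneAssets("scene.obj", "scene.splat", "scene.ply")

assets = generate_scene(prompt="a medieval courtyard for a game level")
print(assets.mesh_path)  # asset path, ready to drop into Unity/UE
```

The key engineering point is the shape of the interface: a single call from any one modality straight to importable assets, with the navigation and consistency steps handled internally.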