Meta has released groundbreaking AI research that can generate three-dimensional objects from text commands with high fidelity, significantly faster than previous research efforts. The model, known as Meta 3D Gen (3DGen), can produce 3D objects from text prompts in under a minute and supports physically based rendering (PBR), allowing seamless integration with a wide range of 3D software.
3DGen operates in two main stages: Meta 3D AssetGen converts text into a 3D object (text-to-3D), and Meta 3D TextureGen generates textures for that object (text-to-texture). Combined, the two components produce highly detailed three-dimensional objects.
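Conceptually, this is a two-stage pipeline in which the output of the text-to-3D stage is fed into the text-to-texture stage. The sketch below is purely illustrative: Meta has not published a public API for 3DGen, so the function names (`asset_gen`, `texture_gen`, `text_to_3d`) and the placeholder data structure are hypothetical stand-ins for the two components described above.

```python
from dataclasses import dataclass, field

# Hypothetical placeholder type -- not Meta's actual data structure.
@dataclass
class Mesh3D:
    prompt: str
    vertices: list = field(default_factory=list)
    pbr_materials: dict = field(default_factory=dict)

def asset_gen(prompt: str) -> Mesh3D:
    """Stage 1 (text-to-3D): produce a base 3D asset from a text prompt.
    Stands in for Meta 3D AssetGen."""
    return Mesh3D(prompt=prompt)

def texture_gen(mesh: Mesh3D, prompt: str) -> Mesh3D:
    """Stage 2 (text-to-texture): attach PBR material maps to the asset.
    Stands in for Meta 3D TextureGen."""
    mesh.pbr_materials = {"albedo": None, "roughness": None, "metallic": None}
    return mesh

def text_to_3d(prompt: str) -> Mesh3D:
    """Chain the two stages, mirroring 3DGen's overall flow."""
    base = asset_gen(prompt)
    return texture_gen(base, prompt)

if __name__ == "__main__":
    asset = text_to_3d("a weathered bronze statue of a fox")
    print(asset.prompt, list(asset.pbr_materials))
```

The design choice reflected here is simple staging: separating geometry generation from texturing lets each stage be improved or reused independently, which is how the article describes the AssetGen/TextureGen split.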
TLDR: Meta’s new AI research, Meta 3D Gen (3DGen), can swiftly create detailed three-dimensional objects from text prompts, revolutionizing the field of 3D design.