Synthetic media in 3D

AI-generated 3D models

  1. Game development
  2. E-commerce
  3. Product design

DreamFusion by Google Research

DreamFusion: Text-to-3D using 2D Diffusion

How does DreamFusion work?

Given a caption, DreamFusion uses a text-to-image generative model called Imagen to optimize a 3D scene. We propose Score Distillation Sampling (SDS), a way to generate samples from a diffusion model by optimizing a loss function. SDS allows us to optimize samples in an arbitrary parameter space, such as a 3D space, as long as we can map back to images differentiably. We use a 3D scene parameterization similar to Neural Radiance Fields, or NeRFs, to define this differentiable mapping. SDS alone produces reasonable scene appearance, but DreamFusion adds additional regularizers and optimization strategies to improve geometry. The resulting trained NeRFs are coherent, with high-quality normals, surface geometry and depth, and are relightable with a Lambertian shading model.
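The SDS idea above can be sketched in a toy form: optimize scene parameters by injecting noise into a differentiable render and pushing the parameters along the difference between the model's predicted noise and the injected noise. This is a minimal illustration, not the paper's implementation: the "renderer" is an identity map on a 4-pixel image standing in for a NeRF, and `predicted_noise` is a hypothetical stand-in for a frozen text-conditioned diffusion model (Imagen in the paper) whose prediction points toward a prompt-defined target image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene parameters" (stand-in for NeRF weights) and a differentiable
# renderer g(theta). Here g is the identity on a 4-pixel "image".
theta = rng.normal(size=4)

def render(theta):
    return theta  # hypothetical differentiable mapping: params -> image

# Stand-in for a frozen diffusion model conditioned on a text prompt:
# its noise prediction points from the noised image toward a "prompt" target.
target = np.array([1.0, 0.5, -0.5, 0.0])

def predicted_noise(x_t, sigma):
    return (x_t - target) / max(sigma, 1e-8)

lr = 0.05
for step in range(500):
    x = render(theta)
    sigma = rng.uniform(0.1, 1.0)   # random noise level (timestep)
    eps = rng.normal(size=x.shape)  # injected Gaussian noise
    x_t = x + sigma * eps           # noised render
    # SDS update direction: (eps_hat - eps), backpropagated through the
    # renderer; d(render)/d(theta) is the identity in this toy setup.
    grad = predicted_noise(x_t, sigma) - eps
    theta -= lr * grad

# theta converges toward `target`: the scene parameters now "render"
# the image the (toy) diffusion model scores highly for the prompt.
```

Note the key SDS trick carried over from the paper: the gradient skips differentiating through the diffusion model itself and uses only the residual between predicted and injected noise, mapped back through the renderer.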

https://dreamfusion-cdn.ajayj.com/dreamfusion_overview.mp4

https://arxiv.org/pdf/2209.14988

Point-E by OpenAI


https://github.com/openai/point-e

Point-E: A System for Generating 3D Point Clouds from Complex Prompts

Kaedim

https://www.youtube.com/watch?v=tlQWFG1DLb0