I graduated from Jilin University in 2023 with a Bachelor's degree in Computer Science.
During my undergraduate studies, I interned at Zhejiang Lab, Huawei, and ByteDance.
I am also a member of the PaperABC team on the Bilibili video platform, where I share cutting-edge research in 3D AIGC. Feel free to follow us!
My research interests lie in computer graphics and computer vision, with a particular focus on 3D content generation and world models. Selected papers are highlighted below.
We propose OmniStyle2, which reframes artistic style transfer through destylization, a process that removes stylistic elements from artworks to recover clean, style-free content. Building on this process, we construct the large-scale DST-100K dataset, which enables a simple feed-forward model that consistently surpasses state-of-the-art methods.
We propose TexVerse, a large-scale 3D dataset featuring high-resolution textures. TexVerse comprises over 858K unique high-resolution 3D models sourced from Sketchfab, including more than 158K models with physically based rendering (PBR) materials.
We propose DecoupledGaussian, which separates static objects from the surfaces they contact in in-the-wild videos, enabling realistic object-scene interaction simulations.
We propose Diff3DS, a novel differentiable rendering framework for generating view-consistent 3D sketches.
Diff3DS enables end-to-end optimization of 3D sketches via gradients in the 2D image domain, supporting novel tasks like text-to-3D sketch and image-to-3D sketch.