Fengyu Yang
I am Fengyu Yang, a first-year PhD student in Computer Science at Yale University, advised by Prof. Alex Wong. My research interests lie in computer vision and multimodal learning, especially vision and touch.
Previously, I was an undergraduate student at the University of Michigan, advised by Prof. Andrew Owens. I was also fortunate to work with Prof. Wenzhen Yuan, Prof. Xi Li, and Prof. Zhongming Liu during my undergraduate studies.
I am actively looking for internships for Summer 2024; feel free to contact me if you are interested or would like to collaborate in any form.
Email / GitHub
News
2023/09: I am joining Yale as a PhD student in Computer Science.
2023/07: "Generating Visual Scenes from Touch" accepted to ICCV 2023.
2023/02: "Boosting Detection in Crowd Analysis via Underutilized Output Features" accepted to CVPR 2023.
2022/12: Selected as the Runner-Up for the CRA Outstanding Undergraduate Researcher Award.
2022/09: "Touch and Go: Learning from Human-Collected Vision and Touch" accepted to NeurIPS 2022.
2022/07: "RBC: Rectifying the Biased Context in Continual Semantic Segmentation" accepted to ECCV 2022.
2022/03: My first paper, "Sparse and Complete Latent Organization for Geospatial Semantic Segmentation," accepted to CVPR 2022.
Publications
Binding Touch to Everything: Learning Unified Multimodal Tactile Representations
Fengyu Yang*,
Chao Feng*,
Ziyang Chen*,
Hyoungseob Park,
Daniel Wang,
Yiming Dou,
Ziyao Zeng,
Xien Chen,
Rit Gangopadhyay,
Andrew Owens,
Alex Wong
In submission
We introduce UniTouch, a unified tactile representation for vision-based tactile sensors, aligned with multiple modalities. We show that powerful models trained on other modalities (e.g., CLIP, LLMs) can now be used to perform tactile sensing tasks zero-shot.
Tactile-Augmented Radiance Fields
Yiming Dou,
Fengyu Yang,
Yi Liu,
Antonio Loquercio,
Andrew Owens
In submission
We present TaRF, a scene representation that brings vision and touch into a shared 3D space.
Generating Visual Scenes from Touch
Fengyu Yang,
Jiacheng Zhang,
Andrew Owens
ICCV, 2023
project page /
paper /
code
We propose a unified approach to a variety of touch-related image generation tasks via diffusion models. We introduce the novel task of tactile-driven shading estimation and also apply our model to the existing tasks of visuo-tactile cross-generation and tactile-driven image stylization.
Boosting Detection in Crowd Analysis via Underutilized Output Features
Shaokai Wu*,
Fengyu Yang*
CVPR, 2023
project page /
paper /
code
We are the first to treat detection outputs as valuable features for crowd analysis, and we propose Crowd Hat, a plug-and-play module that boosts various detection-based methods.
RBC: Rectifying the Biased Context in Continual Semantic Segmentation
Hanbin Zhao*,
Fengyu Yang*,
Xinghe Fu,
Xi Li
ECCV, 2022
paper /
code
We are the first to consider the biased context in continual semantic segmentation (CSS), and we propose a context-rectified image-duplet learning scheme and a biased-context-insensitive consistency loss to tackle it.
Sparse and Complete Latent Organization for Geospatial Semantic Segmentation
Fengyu Yang*,
Chenyang Ma*
CVPR, 2022
paper
We propose a prototypical contrastive learning method using both foreground and background categories to tackle the large intra-class variance in geospatial semantic segmentation.
Honors and Awards
- CRA Outstanding Undergraduate Researcher Award (Runner-Up), Computing Research Association. December 2022.
- Wang Chu Chien-Wen Research Award, University of Michigan. April 2022.
- Henry Ford II Prize, University of Michigan. March 2022.
- EECS Scholar, University of Michigan. 2021-2022.
- James B. Angell Scholar, University of Michigan. 2021-2022.
- Dean's List, University of Michigan. 2019-2022.
- University Honors, University of Michigan. 2019-2022.
Academic Service