FixBi: bridging domain spaces for unsupervised domain adaptation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

FixBi: bridging domain spaces for unsupervised domain adaptation. / Na, Jaemin; Jung, Heechul; Chang, Hyung Jin; Hwang, Wonjun.

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. (Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Na, J, Jung, H, Chang, HJ & Hwang, W 2021, FixBi: bridging domain spaces for unsupervised domain adaptation. in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, United States, 21/06/21.

APA

Na, J., Jung, H., Chang, H. J., & Hwang, W. (Accepted/In press). FixBi: bridging domain spaces for unsupervised domain adaptation. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition). IEEE.

Vancouver

Na J, Jung H, Chang HJ, Hwang W. FixBi: bridging domain spaces for unsupervised domain adaptation. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. 2021. (Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Author

Na, Jaemin; Jung, Heechul; Chang, Hyung Jin; Hwang, Wonjun. / FixBi: bridging domain spaces for unsupervised domain adaptation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. (Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

Bibtex

@inproceedings{a380e1afac824e2a826a2dd4c8405669,
title = "FixBi: bridging domain spaces for unsupervised domain adaptation",
abstract = "Unsupervised domain adaptation (UDA) methods for learning domain invariant representations have achieved remarkable progress. However, most of the studies were based on direct adaptation from the source domain to the target domain and have suffered from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain. From the augmented-domains, we train the source-dominant model and the target-dominant model that have complementary characteristics. Using our confidence based learning methodologies, e.g., bidirectional matching with high-confidence predictions and self-penalization using low-confidence predictions, the models can learn from each other or from its own results. Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain. Extensive experiments demonstrate the superiority of our proposed method on three public benchmarks: Office-31, Office-Home, and VisDA-2017.",
author = "Jaemin Na and Heechul Jung and Chang, {Hyung Jin} and Wonjun Hwang",
note = "Not yet published as of 08/06/2021.; 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 ; Conference date: 21-06-2021 Through 24-06-2021",
year = "2021",
month = mar,
day = "1",
language = "English",
series = "Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition.",
publisher = "IEEE",
booktitle = "2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",

}
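
Since the abstract above is the only technical description on this page, the fixed ratio-based mixup it mentions may be worth a concrete illustration. The Python sketch below is inferred from the abstract's wording alone; the ratio values (0.7/0.3) and every name in it are illustrative assumptions, not the authors' released implementation.

import torch

def fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam):
    # Blend a source batch with a target batch at a fixed ratio lam.
    # lam near 1.0 gives source-dominant inputs; lam near 0.0 gives
    # target-dominant inputs. One-hot/soft labels are mixed the same way.
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo
    return x_mix, y_mix

# Two fixed ratios yield the two complementary augmented domains the
# abstract describes (illustrative values only):
# x_sd, y_sd = fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam=0.7)
# x_td, y_td = fixed_ratio_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam=0.3)

Unlike standard mixup, which samples lam from a Beta distribution per batch, fixing lam pins each model to a consistent intermediate domain, which is what makes the two trained models complementary.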

RIS

TY - GEN

T1 - FixBi: bridging domain spaces for unsupervised domain adaptation

T2 - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition

AU - Na, Jaemin

AU - Jung, Heechul

AU - Chang, Hyung Jin

AU - Hwang, Wonjun

N1 - Not yet published as of 08/06/2021.

PY - 2021/3/1

Y1 - 2021/3/1

N2 - Unsupervised domain adaptation (UDA) methods for learning domain-invariant representations have achieved remarkable progress. However, most studies have been based on direct adaptation from the source domain to the target domain and have suffered from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domains. From the augmented domains, we train a source-dominant model and a target-dominant model that have complementary characteristics. Using our confidence-based learning methodologies, e.g., bidirectional matching with high-confidence predictions and self-penalization using low-confidence predictions, the models can learn from each other or from their own results. Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain. Extensive experiments demonstrate the superiority of our proposed method on three public benchmarks: Office-31, Office-Home, and VisDA-2017.

AB - Unsupervised domain adaptation (UDA) methods for learning domain-invariant representations have achieved remarkable progress. However, most studies have been based on direct adaptation from the source domain to the target domain and have suffered from large domain discrepancies. In this paper, we propose a UDA method that effectively handles such large domain discrepancies. We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domains. From the augmented domains, we train a source-dominant model and a target-dominant model that have complementary characteristics. Using our confidence-based learning methodologies, e.g., bidirectional matching with high-confidence predictions and self-penalization using low-confidence predictions, the models can learn from each other or from their own results. Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain. Extensive experiments demonstrate the superiority of our proposed method on three public benchmarks: Office-31, Office-Home, and VisDA-2017.

UR - https://ieeexplore.ieee.org/xpl/conhome/1000147/all-proceedings

M3 - Conference contribution

T3 - Proceedings. IEEE Computer Society Conference on Computer Vision and Pattern Recognition

BT - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

PB - IEEE

Y2 - 21 June 2021 through 24 June 2021

ER -
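
The other two components named in the abstract, bidirectional matching and self-penalization, can likewise be sketched as two confidence-gated loss terms. This is a hedged reconstruction from the abstract's wording only: the confidence threshold, the -log(1 - p) penalty form, and all names are assumptions rather than the published method.

import torch
import torch.nn.functional as F

def confidence_losses(logits_a, logits_b, threshold=0.95):
    # Bidirectional matching: where model A is confident, its top-1
    # prediction supervises model B; swapping the arguments gives the
    # other direction, so the two models "learn from each other".
    probs_a = F.softmax(logits_a, dim=1)
    conf_a, pseudo_a = probs_a.max(dim=1)
    confident = conf_a >= threshold

    if confident.any():
        match_loss = F.cross_entropy(logits_b[confident], pseudo_a[confident])
    else:
        match_loss = logits_b.new_zeros(())

    # Self-penalization: on low-confidence samples, model A is penalized
    # on its own top-1 probability; -log(1 - p) pushes that probability
    # down, so a model also "learns from its own results".
    uncertain = ~confident
    if uncertain.any():
        p_top = probs_a[uncertain].max(dim=1).values
        self_pen_loss = -torch.log(1.0 - p_top + 1e-6).mean()
    else:
        self_pen_loss = logits_a.new_zeros(())

    return match_loss, self_pen_loss

Applied symmetrically to the source-dominant and target-dominant models, the matching term is what gradually carries label information from the source side toward the target side, matching the abstract's description of knowledge transfer.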