Domain Adaptation in Regression
We prove that the discrepancy is a distance for the squared loss when the hypothesis set is the reproducing kernel Hilbert space induced by a universal kernel, such as the Gaussian kernel.
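For the special case of bounded-norm linear hypotheses and the squared loss, the discrepancy between two samples reduces, up to a constant depending on the norm bound, to the spectral norm of the difference of their empirical second-moment matrices. A minimal numpy sketch of that linear case; the function name and synthetic data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def empirical_discrepancy(X_src, X_tgt):
    # Squared loss, linear hypotheses with bounded norm: the discrepancy
    # is proportional to the spectral norm of the difference of the
    # empirical second-moment matrices of the two samples.
    M_src = X_src.T @ X_src / len(X_src)
    M_tgt = X_tgt.T @ X_tgt / len(X_tgt)
    return np.linalg.norm(M_src - M_tgt, ord=2)  # largest singular value

rng = np.random.default_rng(0)
X_s = rng.normal(size=(500, 3))           # source sample
X_t = 2.0 * rng.normal(size=(500, 3))     # target sample with a rescaled marginal
d_same = empirical_discrepancy(X_s, X_s)  # identical samples: zero
d_diff = empirical_discrepancy(X_s, X_t)  # differing marginals: strictly larger
```

Reweighting the source sample to shrink this quantity is the discrepancy-minimization problem that the SDP formulations mentioned below address in the kernelized setting.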
A better alternative is to use both the abundant labeled data from a source domain and the limited labeled data from the target domain to train classifiers in a domain adaptation setting. As an example, we introduce SSDA twin Gaussian process regression (SSDA-TGP). The algorithm exploits smooth optimization and specific characteristics of these SDPs in our adaptation case. Many current DA methods base their transfer assumptions either on a parametrized distribution shift or on apparent distribution similarities, e.g., identical conditionals or small distributional discrepancies.
The resulting augmented weighted samples can then be used to learn a model of choice, alleviating the problems of bias in the data. We follow ideas from nonlinear iterative partial least squares (NIPALS). We consider the problem of unsupervised domain adaptation (DA) in regression under the assumption of linear hypotheses, e.g., Beer-Lambert's law, a task recurrently encountered in analytical chemistry. We propose such a classifier based on logistic regression and evaluate it for the task of splice site prediction, a difficult and essential step in gene prediction. 2 Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY 10012.
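Once per-sample weights are available, "a model of choice" can consume them directly. A minimal sketch using importance-weighted ridge regression; the function name, regularization value, and synthetic data are assumptions for illustration, not the method of any of the papers quoted here:

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-3):
    # Solves min_beta  sum_i w_i * (x_i @ beta - y_i)^2 + lam * ||beta||^2,
    # i.e. ridge regression where each sample carries an importance weight.
    Xw = X * w[:, None]  # scale each row by its weight
    d = X.shape[1]
    return np.linalg.solve(X.T @ Xw + lam * np.eye(d), Xw.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=200)
beta = weighted_ridge(X, y, np.ones(200))  # uniform weights ~ ordinary ridge
```

With uniform weights this recovers ordinary ridge regression; with density-ratio weights it downweights source samples that are unlikely under the target marginal.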
Altogether, our results form a complete solution for domain adaptation in regression. Domain Adaptation in Regression. Corinna Cortes 1 and Mehryar Mohri 2. 1 Google Research, 76 Ninth Avenue, New York, NY 10011.
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target-domain data and many labeled source-domain data are available. We estimate weights for the training samples based on the ratio of the test and train marginals in that feature space.
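One common way to obtain the test/train marginal ratio mentioned above is discriminative density-ratio estimation: train a domain classifier, here plain logistic regression in numpy, and convert its odds into importance weights. A hedged sketch under that assumption; the learning rate, step count, and Gaussian toy data are not from the source:

```python
import numpy as np

def density_ratio_weights(X_src, X_tgt, steps=2000, lr=0.1):
    # Fit a logistic domain classifier (source=0, target=1) by gradient
    # descent, then weight each source point by its classifier odds,
    # which estimates p_target(x) / p_source(x).
    X = np.vstack([X_src, X_tgt])
    X = np.hstack([X, np.ones((len(X), 1))])        # bias column
    y = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta -= lr * X.T @ (p - y) / len(y)         # mean log-loss gradient
    Xs = np.hstack([X_src, np.ones((len(X_src), 1))])
    odds = np.exp(Xs @ beta)                        # p/(1-p) for source points
    return odds * (len(X_src) / len(X_tgt))         # sample-size correction

rng = np.random.default_rng(2)
X_src = rng.normal(0.0, 1.0, size=(400, 1))  # train marginal
X_tgt = rng.normal(1.0, 1.0, size=(400, 1))  # shifted test marginal
w = density_ratio_weights(X_src, X_tgt)      # upweights source points near the target
```

The resulting weights are exactly the kind of input the weighted learning step above expects.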
This paper presents a series of new results for domain adaptation in the regression setting.