= Package QltvRespModel =

Maximum-likelihood and Bayesian estimation of qualitative response models.

== Weighted Boolean Regressions ==

The abstract class [source:/tolp/OfficialTolArchiveNetwork/QltvRespModel/WgtBoolReg.tol @WgtBoolReg] is the base from which weighted boolean regressions such as logit or probit inherit, given just the scalar distribution function [[LatexEquation( F )]] and the corresponding density function [[LatexEquation( f )]]. In a weighted regression each row of input data has a distinct weight in the likelihood function; this is very useful, for example, to handle data extracted from a stratified sample. This class implements maximum-likelihood estimation by means of package [wiki:OfficialTolArchiveNetworkNonLinGloOpt NonLinGloOpt] and Bayesian estimation using [wiki:OfficialTolArchiveNetworkBysSampler BysSampler].

Let
 * [[LatexEquation( X\in\mathbb{R}^{m\times n} )]] be the regression input matrix
 * [[LatexEquation( w\in\mathbb{R}^{m} )]] the vector of weights, one per register
 * [[LatexEquation( y\in\mathbb{R}^{m} )]] the regression output vector

The hypothesis is that [[LatexEquation( \forall i=1 \dots m )]]

[[LatexEquation( y_{i}\sim Bernoulli\left(\pi_{i}\right) )]]

[[LatexEquation( \pi_{i}=Pr\left[y_{i}=1\right] = F\left(X_{i}\beta\right) )]]

The likelihood function is then

[[LatexEquation( lk\left(\beta\right)=\underset{i}{\prod}\pi_{i}^{w_{i}y_{i}}\left(1-\pi_{i}\right)^{w_{i}\left(1-y_{i}\right)} )]]

and its logarithm is

[[LatexEquation( L\left(\beta\right)=\ln\left(lk\left(\beta\right)\right)=\underset{i}{\sum}w_{i}\left(y_{i}\ln\left(\pi_{i}\right)+\left(1-y_{i}\right)\ln\left(1-\pi_{i}\right)\right) )]]

The gradient of the log-likelihood is

[[LatexEquation( \frac{\partial L\left(\beta\right)}{\partial\beta_{j}}=\underset{i}{\sum}w_{i}\left(y_{i}\frac{f\left(x_{i}\beta\right)}{F\left(x_{i}\beta\right)}-\left(1-y_{i}\right)\frac{f\left(x_{i}\beta\right)}{1-F\left(x_{i}\beta\right)}\right)x_{ij} )]]

and the Hessian is
[[LatexEquation( \frac{\partial^{2}L\left(\beta\right)}{\partial\beta_{i}\partial\beta_{j}}=\underset{k}{\sum}w_{k}\left(y_{k}\frac{f'\left(x_{k}\beta\right)F\left(x_{k}\beta\right)-f^{2}\left(x_{k}\beta\right)}{F^{2}\left(x_{k}\beta\right)}-\left(1-y_{k}\right)\frac{f'\left(x_{k}\beta\right)\left(1-F\left(x_{k}\beta\right)\right)+f^{2}\left(x_{k}\beta\right)}{\left(1-F\left(x_{k}\beta\right)\right)^{2}}\right)x_{ik}x_{jk} )]]

The user can and should define scalar truncated normal or uniform prior information and bounds for every variable about which he or she has robust knowledge:

[[LatexEquation( \beta_k \sim N\left(\nu_k, \sigma_k \right) )]]

[[LatexEquation( l_k \le \beta_k \le u_k \wedge l_k < u_k )]]

When [[LatexEquation( \sigma_k )]] is infinite or unknown, the prior on that variable is uniform. When [[LatexEquation( l_k = -\infty )]] or unknown, the variable has no lower bound; when [[LatexEquation( u_k = +\infty )]] or unknown, it has no upper bound. It is also possible to give any set of linear inequality constraints, provided they are compatible with the lower and upper bounds:

[[LatexEquation( A \beta \le a )]]

=== Weighted Logit Regression ===

Class [source:/tolp/OfficialTolArchiveNetwork/QltvRespModel/WgtLogit.tol @WgtLogit] is a specialization of class [source:/tolp/OfficialTolArchiveNetwork/QltvRespModel/WgtBoolReg.tol @WgtBoolReg] that handles weighted logit regressions. In this case the scalar distribution is the logistic one.
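The logistic distribution function, its density, and the density's derivative can be sketched in pure Python as a quick numerical sanity check. This is illustrative only, not part of the package (the actual implementation lives in the TOL sources linked above):

```python
import math

def F(z):
    """Logistic distribution function F(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def f(z):
    """Logistic density f(z) = exp(-z) / (1 + exp(-z))^2."""
    return math.exp(-z) / (1.0 + math.exp(-z)) ** 2

def df(z):
    """Derivative of the density: f'(z) = -f(z) * F(z) * (1 - exp(-z))."""
    return -f(z) * F(z) * (1.0 - math.exp(-z))
```

The identities [[LatexEquation( f = F\left(1-F\right) )]] and [[LatexEquation( f' = f\left(1-2F\right) )]], which hold for the logistic distribution, provide an independent cross-check of these three functions.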
[[LatexEquation( F\left(z\right) = \frac{1}{1+e^{-z}} )]]

[[LatexEquation( f\left(z\right) = \frac{e^{-z}}{\left(1+e^{-z}\right)^2} )]]

[[LatexEquation( f'\left(z\right) = - f\left(z\right) F\left(z\right) \left(1-e^{-z}\right) )]]

For the logistic distribution the general expressions simplify to

[[LatexEquation( L\left(\beta\right)=\underset{i}{\sum}w_{i}\left(y_{i}x_{i}^{t}\beta-\ln\left(1+e^{x_{i}^{t}\beta}\right)\right) )]]

[[LatexEquation( \frac{\partial L\left(\beta\right)}{\partial\beta_{j}}=\underset{i}{\sum}w_{i}\left(y_{i}x_{ij}-x_{ij}\left(\frac{e^{x_{i}^{t}\beta}}{1+e^{x_{i}^{t}\beta}}\right)\right)=\underset{i}{\sum}w_{i}x_{ij}\left(y_{i}-\left(\frac{e^{x_{i}^{t}\beta}}{1+e^{x_{i}^{t}\beta}}\right)\right) )]]

[[LatexEquation( \frac{\partial^{2}L\left(\beta\right)}{\partial\beta_{i}\partial\beta_{j}}=-\underset{k}{\sum}w_{k}\frac{\left(1+e^{x_{k}^{t}\beta}\right)e^{x_{k}^{t}\beta}-\left(e^{x_{k}^{t}\beta}\right)^{2}}{\left(1+e^{x_{k}^{t}\beta}\right)^{2}}x_{ki}x_{kj}=-\underset{k}{\sum}x_{ki}x_{kj}w_{k}\pi_{k}\left(1-\pi_{k}\right) )]]

=== Weighted Probit Regression ===

Class [source:/tolp/OfficialTolArchiveNetwork/QltvRespModel/WgtProbit.tol @WgtProbit] is a specialization of class [source:/tolp/OfficialTolArchiveNetwork/QltvRespModel/WgtBoolReg.tol @WgtBoolReg] that handles weighted probit regressions. In this case the scalar distribution is the standard normal one.
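The standard normal pair can likewise be sketched in pure Python via the error function. Again this is only an illustration with made-up names (`Phi`, `phi`), not the package API:

```python
import math

def Phi(z):
    """Standard normal distribution function F_{0,1}(z), via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    """Standard normal density f_{0,1}(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def dphi(z):
    """Density derivative f'(z) = -z * f_{0,1}(z)."""
    return -z * phi(z)
```

A finite-difference comparison of `Phi` against `phi`, and of `phi` against `dphi`, confirms the derivative relations stated for the probit case.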
[[LatexEquation( F\left(z\right) = F_{0,1}\left(z\right) )]]

[[LatexEquation( f\left(z\right) = f_{0,1}\left(z\right) )]]

[[LatexEquation( f'\left(z\right) = -z f_{0,1}\left(z\right) )]]

Here the log-likelihood keeps its generic form

[[LatexEquation( L\left(\beta\right)=\underset{i}{\sum}w_{i}\left(y_{i}\ln\left(F_{0,1}\left(x_{i}\beta\right)\right)+\left(1-y_{i}\right)\ln\left(1-F_{0,1}\left(x_{i}\beta\right)\right)\right) )]]

[[LatexEquation( \frac{\partial L\left(\beta\right)}{\partial\beta_{j}}=\underset{i}{\sum}w_{i}\left(y_{i}\frac{f_{0,1}\left(x_{i}\beta\right)}{F_{0,1}\left(x_{i}\beta\right)}-\left(1-y_{i}\right)\frac{f_{0,1}\left(x_{i}\beta\right)}{1-F_{0,1}\left(x_{i}\beta\right)}\right)x_{ij} )]]

[[LatexEquation( \frac{\partial^{2}L\left(\beta\right)}{\partial\beta_{i}\partial\beta_{j}}=-\underset{k}{\sum}w_{k}f_{0,1}\left(x_{k}\beta\right)\left(y_{k}\frac{x_{k}\beta\,F_{0,1}\left(x_{k}\beta\right)+f_{0,1}\left(x_{k}\beta\right)}{F_{0,1}^{2}\left(x_{k}\beta\right)}+\left(1-y_{k}\right)\frac{-x_{k}\beta\left(1-F_{0,1}\left(x_{k}\beta\right)\right)+f_{0,1}\left(x_{k}\beta\right)}{\left(1-F_{0,1}\left(x_{k}\beta\right)\right)^{2}}\right)x_{ik}x_{jk} )]]
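Because the log-likelihood and gradient depend on the link only through [[LatexEquation( F )]] and [[LatexEquation( f )]], both specializations can share one generic routine. The following pure-Python sketch (hypothetical names and toy data, not the package API) evaluates [[LatexEquation( L\left(\beta\right) )]] and its gradient for either link, so a finite-difference check can confirm the formulas above:

```python
import math

def dot(a, b):
    """Inner product x_i . beta."""
    return sum(u * v for u, v in zip(a, b))

def loglik(beta, X, y, w, F):
    """L(beta) = sum_i w_i * (y_i ln pi_i + (1 - y_i) ln(1 - pi_i)), pi_i = F(x_i . beta)."""
    total = 0.0
    for x_i, y_i, w_i in zip(X, y, w):
        pi = F(dot(x_i, beta))
        total += w_i * (y_i * math.log(pi) + (1.0 - y_i) * math.log(1.0 - pi))
    return total

def grad(beta, X, y, w, F, f):
    """dL/dbeta_j = sum_i w_i * (y_i f/F - (1 - y_i) f/(1 - F)) * x_ij."""
    g = [0.0] * len(beta)
    for x_i, y_i, w_i in zip(X, y, w):
        z = dot(x_i, beta)
        Fz, fz = F(z), f(z)
        c = w_i * (y_i * fz / Fz - (1.0 - y_i) * fz / (1.0 - Fz))
        for j, x_ij in enumerate(x_i):
            g[j] += c * x_ij
    return g

# Each specialization just plugs in its own F and f.
logit_F = lambda z: 1.0 / (1.0 + math.exp(-z))
logit_f = lambda z: logit_F(z) * (1.0 - logit_F(z))
probit_F = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
probit_f = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
```

Verifying the analytic gradient against a central finite difference of `loglik`, for both links, is a cheap way to catch sign or indexing mistakes before handing the derivatives to an optimizer or sampler.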