
Changes between Version 4 and Version 5 of OfficialTolArchiveNetworkBysPrior


Timestamp:
Dec 25, 2010, 8:14:19 PM
Author:
Víctor de Buen Remiro
Comment:

--

  • OfficialTolArchiveNetworkBysPrior

[[LatexEquation( D_{k}^{3}\left(\beta\right)=\begin{cases} 0 & \forall d_{k}\left(\beta\right)\leq0\\ d_{k}^{3}\left(\beta\right) & \forall d_{k}\left(\beta\right)>0\end{cases}  )]]

is continuous and differentiable in [[LatexEquation( \mathbb{R}^{n} )]]

[[LatexEquation( \frac{\partial D_{k}^{3}\left(\beta\right)}{\partial\beta_{i}}=\begin{cases} 0 & \forall d_{k}\left(\beta\right)\leq0\\ 3d_{k}^{2}\left(\beta\right)A_{ki} & \forall d_{k}\left(\beta\right)>0\end{cases}  )]]

The feasibility condition can then be defined as a single nonlinear inequality, continuous and differentiable everywhere

[[LatexEquation( g\left(\beta\right)=\underset{k=1}{\overset{r}{\sum}}D_{k}^{3}\left(\beta\right)\leq0 )]]
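As a minimal sketch, assuming the constraints are linear, [[LatexEquation( d\left(\beta\right)=A\beta-b )]] (the vector `b` is a hypothetical name, not from the text), the penalty [[LatexEquation( g\left(\beta\right) )]] and its gradient can be evaluated as:

```python
import numpy as np

def feasibility(beta, A, b):
    """Smooth feasibility penalty g(beta) = sum_k D_k^3(beta) and its gradient.

    Assumes linear constraints d_k(beta) = (A @ beta - b)_k, consistent with
    the constant Jacobian A_ki in the derivative formula above.
    """
    d = A @ beta - b
    dp = np.maximum(d, 0.0)        # D_k = max(d_k, 0): zero where feasible
    g = np.sum(dp ** 3)            # g(beta) = sum_k D_k^3
    grad = (3.0 * dp ** 2) @ A     # dg/dbeta_i = sum_k 3 d_k^2 A_ki
    return g, grad
```

Note that `g` is exactly zero on the feasible region and strictly positive outside it, so the single inequality `g(beta) <= 0` reproduces all `r` original constraints.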
     
== Multinormal prior ==
    9191
When we know that a single variable should fall symmetrically close to a known value, we can express this by saying that it has a normal distribution with mean at that value. This type of prior knowledge can be extended to higher dimensions by means of the multinormal distribution

[[LatexEquation(  \beta\sim N\left(\mu,\Sigma\right) )]]

whose likelihood function is

[[LatexEquation(  lk\left(\beta\right)=\frac{1}{\left(2\pi\right)^{\frac{n}{2}}\left|\Sigma\right|^{\frac{1}{2}}}e^{-\frac{1}{2}\left(\beta-\mu\right)^{T}\Sigma^{-1}\left(\beta-\mu\right)} )]]

The log-likelihood is

[[LatexEquation( L\left(\beta\right)=\ln\left(lk\left(\beta\right)\right)=-\frac{n}{2}\ln\left(2\pi\right)-\frac{1}{2}\ln\left(\left|\Sigma\right|\right)-\frac{1}{2}\left(\beta-\mu\right)^{T}\Sigma^{-1}\left(\beta-\mu\right) )]]

The gradient is

[[LatexEquation( \left(\frac{\partial L\left(\beta\right)}{\partial\beta_{i}}\right)_{i=1\ldots n}=-\Sigma^{-1}\left(\beta-\mu\right) )]]

and the Hessian is

[[LatexEquation( \left(\frac{\partial^{2}L\left(\beta\right)}{\partial\beta_{i}\partial\beta_{j}}\right)_{i,j=1\ldots n}=-\Sigma^{-1} )]]

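A minimal numerical sketch of these three formulas (the function name `multinormal_log_prior` is hypothetical; a production version would factor [[LatexEquation( \Sigma )]] via Cholesky instead of inverting it):

```python
import numpy as np

def multinormal_log_prior(beta, mu, Sigma):
    """Log-density L(beta), gradient, and Hessian of a N(mu, Sigma) prior,
    mirroring the equations above term by term."""
    n = len(beta)
    Sinv = np.linalg.inv(Sigma)
    r = beta - mu
    L = (-0.5 * n * np.log(2.0 * np.pi)
         - 0.5 * np.log(np.linalg.det(Sigma))
         - 0.5 * r @ Sinv @ r)
    grad = -Sinv @ r   # gradient: -Sigma^{-1} (beta - mu)
    hess = -Sinv       # Hessian: constant, -Sigma^{-1}
    return L, grad, hess
```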
== Inverse chi-square prior ==
    94117
    95118
== Transformed prior ==
     120
Sometimes we have prior information that has a simple distribution over a transformation of the original variables. For example, we may know that a set of variables has a normal distribution with mean equal to another variable, as in the case of latent variables in hierarchical models
     125
[[LatexEquation( \beta_{i}\sim N\left(\beta_{1},\sigma^{2}\right)\,\forall i=2\ldots n )]]

Then we can define a variable transformation like this

[[LatexEquation( \gamma \left(\beta\right)=\left(\begin{array}{c} \beta_{2}-\beta_{1}\\ \vdots\\ \beta_{n}-\beta_{1}\end{array}\right)\in\mathbb{R}^{n-1} )]]

and define the simple normal prior

[[LatexEquation( \gamma\sim N\left(0,\sigma^{2}I\right) )]]

Then the log-likelihood of the original prior can be calculated from the transformed one as

[[LatexEquation( L\left(\beta\right)=L^{*}\left(\gamma\left(\beta\right)\right) )]]

If we know the first and second derivatives of the transformation

[[LatexEquation( \frac{\partial\gamma_{k}}{\partial\beta_{i}}  )]]

[[LatexEquation( \frac{\partial^{2}\gamma_{k}}{\partial\beta_{i}\partial\beta_{j}}  )]]

then we can calculate the gradient and the Hessian of the original prior from the gradient and the Hessian of the transformed prior as follows

[[LatexEquation( \frac{\partial L\left(\beta\right)}{\partial\beta_{i}}=\underset{k=1}{\overset{K}{\sum}}\frac{\partial L^{*}\left(\gamma\right)}{\partial\gamma_{k}}\frac{\partial\gamma_{k}}{\partial\beta_{i}}  )]]

[[LatexEquation( \frac{\partial^{2}L\left(\beta\right)}{\partial\beta_{i}\partial\beta_{j}}=\underset{k=1}{\overset{K}{\sum}}\left(\frac{\partial^{2}L^{*}\left(\gamma\right)}{\partial\gamma_{k}\partial\beta_{j}}\frac{\partial\gamma_{k}}{\partial\beta_{i}}+\frac{\partial L^{*}\left(\gamma\right)}{\partial\gamma_{k}}\frac{\partial^{2}\gamma_{k}}{\partial\beta_{i}\partial\beta_{j}}\right)=\underset{k=1}{\overset{K}{\sum}}\left(\frac{\partial^{2}L^{*}\left(\gamma\right)}{\partial\gamma_{k}\partial\gamma_{k}}\frac{\partial\gamma_{k}}{\partial\beta_{i}}\frac{\partial\gamma_{k}}{\partial\beta_{j}}+\frac{\partial L^{*}\left(\gamma\right)}{\partial\gamma_{k}}\frac{\partial^{2}\gamma_{k}}{\partial\beta_{i}\partial\beta_{j}}\right)  )]]

Note that the last equality holds when the Hessian of [[LatexEquation( L^{*} )]] is diagonal in [[LatexEquation( \gamma )]], as it is for the independent transformed prior above.
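The chain-rule formulas above can be sketched for the hierarchical example, where [[LatexEquation( \gamma_{k}=\beta_{k+1}-\beta_{1} )]] and [[LatexEquation( \gamma\sim N\left(0,\sigma^{2}I\right) )]]. Constant terms of the log-density are dropped, the second derivatives of this linear transformation are zero, and all names are hypothetical:

```python
import numpy as np

def transformed_log_prior(beta, sigma):
    """Log-prior of beta through gamma_k = beta_{k+1} - beta_1,
    with gamma ~ N(0, sigma^2 I); constants omitted."""
    n = len(beta)
    gamma = beta[1:] - beta[0]                 # gamma(beta) in R^{n-1}
    Lstar = -0.5 * gamma @ gamma / sigma**2    # L*(gamma) up to a constant
    dLstar = -gamma / sigma**2                 # dL*/dgamma_k
    # Jacobian J[k, i] = d gamma_k / d beta_i (constant, so d2 gamma = 0)
    J = np.zeros((n - 1, n))
    J[:, 0] = -1.0
    J[np.arange(n - 1), np.arange(1, n)] = 1.0
    grad = J.T @ dLstar                        # chain rule for the gradient
    # Hessian: J' diag(-1/sigma^2) J, since the second-derivative term vanishes
    hess = J.T @ (-np.eye(n - 1) / sigma**2) @ J
    return Lstar, grad, hess
```

Because the prior depends only on differences of the components, each row of the resulting Hessian sums to zero, which is a quick sanity check on the chain rule.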