Re: questions about ichol
From: Juan Pablo Carbajal
Subject: Re: questions about ichol
Date: Sun, 8 May 2016 21:42:59 +0200
On Sun, May 8, 2016 at 9:38 PM, Juan Pablo Carbajal
<address@hidden> wrote:
> On Sun, May 8, 2016 at 7:37 PM, siko1056 <address@hidden> wrote:
>> Hi Juan,
>>
>> In Germany we enjoyed a long sunny holiday weekend, so apologies for
>> my delayed reply.
>>
>>
>> Juan Pablo Carbajal-2 wrote
>>> 1. Is there a reason for it to work only with sparse matrices?
>>
>> Yes, the implemented algorithm follows the book by Yousef Saad
>> http://www-users.cs.umn.edu/~saad/books.html (Chap. 10), with some
>> modifications described in Eduardo Fernández's GSoC blog
>> http://edu159-gsoc2014.blogspot.de/ . These methods really only make
>> sense for sparse matrices of large dimension.  In every other case, a
>> dense Cholesky factorization is more economical.
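>>
>> (For illustration only, the dense route in Octave is a one-liner; a
>> minimal sketch with a made-up well-conditioned SPD matrix:)
>>
>> A = rand (100);  A = A*A' + 100*eye (100);  # SPD test matrix (assumption)
>> R = chol (A);                               # dense factor, A = R'*R
>> norm (R'*R - A, "fro")                      # residual near rounding error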
>>
>>
>> Juan Pablo Carbajal-2 wrote
>>> 2. I can't get it to work with the "ict" option and "droptol" > 1e-10;
>>> I always get
>>> error: ichol: negative pivot encountered
>>> error: called from
>>> ichol at line 242 column 9
>>> (I guess the actual value of droptol might be different for different
>>> matrices.)
>>>
>>> For example, the following fails, although the matrix is clearly
>>> positive definite (some eigenvalues might be really small, but it
>>> fails even if I regularize...):
>>>
>>> t = linspace (0,1,1e3).';
>>> k = sparse (exp (-(t-t.').^2/0.1^2));
>>> Ui = ichol(k,struct("type","ict","droptol",1e-9,"shape","upper"));
>>>
>>> Ui = ichol (k + 1e-6*eye (size (k)), struct ("type", "ict", "droptol", 1e-9, "shape", "upper"));
>>> # remove small eigs
>>
>> Here I have to disagree.  Numerically, your matrix is not positive
>> definite; see for example
>> https://en.wikipedia.org/wiki/Positive-definite_matrix#Characterizations
>> (characterization 4: all its leading principal minors are positive).
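>>
>> (As a quick untested sketch, criterion 4 can be checked directly on
>> the k you constructed above; the leading minors quickly decay toward
>> numerical zero:)
>>
>> for j = 1:5
>>   printf ("leading minor %d: %g\n", j, det (full (k(1:j,1:j))));
>> endfor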
>>
>>>> t = linspace (0,1,1e3).';
>>>> k = sparse (exp (-(repmat (t,1,1e3) - repmat (t.',1e3,1)).^2/0.1^2));
>>>> cond(k)
>> ans = 1.5316e+21
>>
>>>> full(k(1:5,1:5))
>> ans =
>>
>> 1.00000 0.99990 0.99960 0.99910 0.99840
>> 0.99990 1.00000 0.99990 0.99960 0.99910
>> 0.99960 0.99990 1.00000 0.99990 0.99960
>> 0.99910 0.99960 0.99990 1.00000 0.99990
>> 0.99840 0.99910 0.99960 0.99990 1.00000
>>
>>>> eig(k(1:5,1:5))
>> ans =
>>
>> 2.6878e-16
>> 1.9309e-11
>> 2.8099e-07
>> 2.0026e-03
>> 4.9980e+00
>>
>> Your matrix is symmetric and all its entries are strictly positive,
>> but it is not diagonally dominant (enough) for numerical positive
>> semidefiniteness.  A principal-minor or eigenvalue check of the full
>> matrix quickly reveals numerically negative eigenvalues, and the
>> condition number estimate is not very promising either.
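>>
>> (Sketches of both checks, untested:)
>>
>> all (2*abs (diag (k)) >= sum (abs (k), 2))  # diagonal dominance: 0 here
>> min (eig (full (k)))                        # typically slightly negative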
>>
>> The same holds with your Tikhonov regularization.
>>
>>>> k = k + 1e-6*speye (size (k));
>>>> cond(k)
>> ans = 1.7339e+08
>>
>>>> full(k(1:5,1:5))
>> ans =
>>
>> 1.00000 0.99990 0.99960 0.99910 0.99840
>> 0.99990 1.00000 0.99990 0.99960 0.99910
>> 0.99960 0.99990 1.00000 0.99990 0.99960
>> 0.99910 0.99960 0.99990 1.00000 0.99990
>> 0.99840 0.99910 0.99960 0.99990 1.00000
>>
>>>> eig(k(1:5,1:5))
>> ans =
>>
>> 1.0000e-06
>> 1.0000e-06
>> 1.2810e-06
>> 2.0036e-03
>> 4.9980e+00
>>
>> Better but not "enough".
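>>
>> (One experiment worth trying, untested: a much larger shift gives the
>> matrix a real definiteness margin.  Even then, incomplete
>> factorizations are only guaranteed to exist for special classes such
>> as M-matrices, so the "ict" variant may still fail for some droptol:)
>>
>> k2 = k + 1e-2*speye (size (k));   # heavy Tikhonov shift (assumption)
>> Ui2 = ichol (k2, struct ("type", "ict", "droptol", 1e-9, "shape", "upper"));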
>>
>> Maybe you can tell me what you are trying to achieve with your
>> computation, so we can find a better approach, but I doubt ichol is
>> the right tool for achieving it.
>>
>> HTH, Kai
>>
>
> Hi Kai,
>
> The squared exponential function is one of the archetypal positive
> definite kernels,
> https://en.wikipedia.org/wiki/Covariance_function#Parametric_families_of_covariance_functions
> so maybe you are working with a different definition of positive
> definite, or the method you are using just cannot handle small
> eigenvalues, which was the point I was trying to raise.
> The incomplete Cholesky factorization has been used for positive
> definite kernels, and that is how I ended up at your algorithm [1],
> but it seems the implementation is not general enough.
>
>
> [1] Fine, S. and Scheinberg, K. (2002). Efficient SVM Training Using
> Low-Rank Kernel Representations. Journal of Machine Learning Research,
> 2(2):243–264.
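>
> For reference, the factorization in [1] is a pivoted, early-terminated
> Cholesky (K =~ L*L' with L of low rank) rather than a drop-tolerance
> one; a rough sketch of my own, untested and not what ichol implements:
>
> function L = pivoted_chol (K, tol)
>   # Low-rank pivoted Cholesky; stops when the trace of the residual
>   # drops below tol.  Sketch only, no input validation.
>   n = rows (K);
>   L = zeros (n, 0);
>   d = full (diag (K));
>   while (sum (d) > tol && columns (L) < n)
>     [dmax, i] = max (d);                          # largest remaining pivot
>     c = (full (K(:,i)) - L * L(i,:)') / sqrt (dmax);
>     L(:, end+1) = c;
>     d = max (d - c.^2, 0);                        # clip rounding noise
>   endwhile
> endfunction
>
> # e.g. L = pivoted_chol (k, 1e-8); norm (k - L*L', "fro")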
Will see if I can steal some time to test the algorithm in
http://web.mit.edu/ehliu/Public/sclark/Golub%20G.H.,%20Van%20Loan%20C.F.-%20Matrix%20Computations.pdf
section 10.3.
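
If that is the preconditioned-CG material, the test could look roughly
like this in Octave (a sketch under my own assumptions, using a standard
SPD test matrix instead of the kernel above):

n = 1000;
e = ones (n, 1);
A = spdiags ([-e 2*e -e], -1:1, n, n);  # SPD 1-D Laplacian (assumption)
b = ones (n, 1);
L = ichol (A);                          # default IC(0), lower factor
x = pcg (A, b, 1e-10, 500, L, L');      # PCG with the IC preconditioner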