Re: Optimisation based on index of data
From: Olaf Till
Subject: Re: Optimisation based on index of data
Date: Wed, 20 Jan 2016 16:29:59 +0100
User-agent: Mutt/1.5.23 (2014-03-12)
On Wed, Jan 20, 2016 at 03:18:01AM -0800, inor0627 wrote:
> Hello list,
>
> I have some 1D measurement data and am trying to find the subset with
> the best linearity. The following approach
>
> lin_error = @(start_idx) range( detrend( data(round(start_idx):round(start_idx)+interesting_length) ) );
> start_opt = round(fminsearch(lin_error,guess));
>
> works fine for some measurements, but in most cases fminsearch violates the
> bounds of 'data' ('index out of bound') as the optimal subset is near the
> end of 'data'.
>
> So there are 2 questions:
> Is fminsearch the right choice for this integer-type optimisation? I have a
> bad feeling due to the resulting discontinuities in 'lin_error' when
> rounding 'start_idx'. A Google search for 'octave integer optimization'
> didn't answer this question for me.
> If fminsearch is ok for this task, how can I handle the bounds of 'data'?
Hi Ingo,
AFAIK Octave has no function for integer optimization ...
Can you use a brute force approach, checking each index?
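For example, a rough sketch (untested; it assumes 'data' and
'interesting_length' as in your post, and that each admissible window
is data(i : i + interesting_length)):

  n = numel (data);
  idx = 1 : (n - interesting_length);   # all start indices that stay in bounds
  err = arrayfun (@(i) range (detrend (data(i : i + interesting_length))), idx);
  [~, k] = min (err);
  start_opt = idx(k);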
If there is indeed some larger neighbourhood of your minimum within
which 'lin_error' decreases monotonically towards the minimum, and
you're sure your starting guess lies within that neighbourhood, you
could try to cope with out-of-bound start indices by letting
'lin_error' return some large value for them.
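E.g. (untested; 1e10 is just some arbitrarily large penalty value, and
the function would go into its own file lin_error.m):

  ## lin_error.m
  function e = lin_error (start_idx, data, interesting_length)
    i = round (start_idx);
    if (i < 1 || i + interesting_length > numel (data))
      e = 1e10;   # large penalty for out-of-bound start indices
    else
      e = range (detrend (data(i : i + interesting_length)));
    endif
  endfunction

and then

  start_opt = round (fminsearch (@(s) lin_error (s, data, interesting_length), guess));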
No guarantees ...
Olaf
--
public key id EAFE0591, e.g. on x-hkp://pool.sks-keyservers.net