Re: GLPSOL in webassemby faster than native ?
From: Domingo Alvarez Duarte
Subject: Re: GLPSOL in webassemby faster than native ?
Date: Sat, 26 Sep 2020 15:51:25 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0
Hello !
Activating glp_long_double one by one, I found that the ones that really
make hashi.mod with "--cuts" perform better are the ones below (but
doing so degrades performance for everything else).
=====
/***********************************************************************
* fhv_h_solve - solve system H * x = b
*
* This routine solves the system H * x = b, where the matrix H is the
* middle factor of the sparse updatable FHV-factorization.
*
* On entry the array x should contain elements of the right-hand side
* vector b in locations x[1], ..., x[n], where n is the order of the
* matrix H. On exit this array will contain elements of the solution
* vector x in the same locations. */
void fhv_h_solve(FHV *fhv, glp_double x[/*1+n*/])
{     SVA *sva = fhv->luf->sva;
      int *sv_ind = sva->ind;
      glp_double *sv_val = sva->val;
      int nfs = fhv->nfs;
      int *hh_ind = fhv->hh_ind;
      int hh_ref = fhv->hh_ref;
      int *hh_ptr = &sva->ptr[hh_ref-1];
      int *hh_len = &sva->len[hh_ref-1];
      int i, k, end, ptr;
      glp_long_double x_i;
      for (k = 1; k <= nfs; k++)
      {  x_i = x[i = hh_ind[k]];
         for (end = (ptr = hh_ptr[k]) + hh_len[k]; ptr < end; ptr++)
            x_i -= sv_val[ptr] * x[sv_ind[ptr]];
         x[i] = x_i;
      }
      return;
}
/***********************************************************************
* fhv_ht_solve - solve system H' * x = b
*
* This routine solves the system H' * x = b, where H' is a matrix
* transposed to the matrix H, which is the middle factor of the sparse
* updatable FHV-factorization.
*
* On entry the array x should contain elements of the right-hand side
* vector b in locations x[1], ..., x[n], where n is the order of the
* matrix H. On exit this array will contain elements of the solution
* vector x in the same locations. */
void fhv_ht_solve(FHV *fhv, glp_double x[/*1+n*/])
{     SVA *sva = fhv->luf->sva;
      int *sv_ind = sva->ind;
      glp_double *sv_val = sva->val;
      int nfs = fhv->nfs;
      int *hh_ind = fhv->hh_ind;
      int hh_ref = fhv->hh_ref;
      int *hh_ptr = &sva->ptr[hh_ref-1];
      int *hh_len = &sva->len[hh_ref-1];
      int k, end, ptr;
      glp_long_double x_j;
      for (k = nfs; k >= 1; k--)
      {  if ((x_j = x[hh_ind[k]]) == 0.0)
            continue;
         for (end = (ptr = hh_ptr[k]) + hh_len[k]; ptr < end; ptr++)
            x[sv_ind[ptr]] -= sv_val[ptr] * x_j;
      }
      return;
}
=====
glp_double spx_update_d_s(SPXLP *lp, glp_double d[/*1+n-m*/], int p, int q,
      const FVS *trow, const FVS *tcol)
{     /* sparse version of spx_update_d */
      int m = lp->m;
      int n = lp->n;
      glp_double *c = lp->c;
      int *head = lp->head;
      int trow_nnz = trow->nnz;
      int *trow_ind = trow->ind;
      glp_double *trow_vec = trow->vec;
      int tcol_nnz = tcol->nnz;
      int *tcol_ind = tcol->ind;
      glp_double *tcol_vec = tcol->vec;
      int i, j, k;
      glp_long_double dq; glp_double e; /* degrades performance elsewhere */
      xassert(1 <= p && p <= m);
      xassert(1 <= q && q <= n);
      xassert(trow->n == n-m);
      xassert(tcol->n == m);
      /* compute d[q] in current basis more accurately */
      k = head[m+q]; /* x[k] = xN[q] */
      dq = c[k];
      for (k = 1; k <= tcol_nnz; k++)
      {  i = tcol_ind[k];
         dq += tcol_vec[i] * c[head[i]];
      }
      /* compute relative error in d[q] */
      e = fabs(dq - d[q]) / (1.0 + fabs(dq));
      /* compute new d[q], which is the reduced cost of xB[p] in the
       * adjacent basis */
      d[q] = (dq /= tcol_vec[p]);
      /* compute new d[j] for all j != q */
      for (k = 1; k <= trow_nnz; k++)
      {  j = trow_ind[k];
         if (j != q)
            d[j] -= trow_vec[j] * dq;
      }
      return e;
}
=====
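For reference, here is a minimal sketch of how such a switchable accumulator type could be wired up in a header. The GLP_WIDE_ACCUM macro and the conditional layout are hypothetical illustrations, not the actual contents of the fork's headers:
=====
/* Hypothetical sketch only -- not the fork's actual header.  It shows one
 * way to switch a wide accumulator type on and off at build time, so that
 * only selected locals (x_i, x_j, dq above) pay for the extra precision
 * while arrays keep using plain double. */
#ifndef GLP_REAL_SKETCH_H
#define GLP_REAL_SKETCH_H

typedef double glp_double;             /* storage type: arrays stay double */

#ifdef GLP_WIDE_ACCUM                  /* hypothetical build switch */
typedef long double glp_long_double;   /* accumulators use extended precision */
#else
typedef double glp_long_double;        /* accumulators fall back to double */
#endif

#endif /* GLP_REAL_SKETCH_H */
=====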
Cheers !
On 24/9/20 21:18, Michael Hennebry wrote:
On Thu, 24 Sep 2020, Domingo Alvarez Duarte wrote:
I just got glpsol with "long double" working and added binaries for
anyone who wants to test them here:
https://github.com/mingodad/GLPK/releases
As noted there, it'll benefit from tuning the constants in
src/glpk_real.h.
Any help/suggestion/comment is welcome !
Note that using long doubles everywhere
would slow down a memory-bound computation,
since more data would have to be fetched.
One might introduce glp_double_t.
C99 uses double_t for the type in which unnamed
intermediate results of double calculations are stored.
Use glp_double_t for locals that are not arrays,
especially running totals, and add casts to ensure
that the desired calculations are actually done in glp_double_t.
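As a minimal sketch of that suggestion, assuming a glp_double_t modeled on C99 double_t from <math.h> (the typedef and the dot() helper are hypothetical illustrations, not GLPK code):
=====
#include <math.h>

/* Sketch only: glp_double_t follows C99 double_t, the type in which the
 * implementation evaluates expressions of type double. */
typedef double_t glp_double_t;

/* Running total kept in glp_double_t; the casts make explicit that the
 * products and the sum are carried out in that type. */
static double dot(const double x[], const double y[], int n)
{     glp_double_t s = 0.0;
      int k;
      for (k = 0; k < n; k++)
         s += (glp_double_t)x[k] * (glp_double_t)y[k];
      return (double)s;
}
=====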