From: eric haumant
Subject: [GNUnet-developers] migration callback bug !! (fixed)
Date: Fri, 11 Apr 2003 11:52:26 +0200

Hello,

As I said in my last message, I've been working on the migration callback to
make it always retrieve content.

Here is the new code. It builds the set of existing files whose priority is at
least a given minimum, randomly selects one file from that set, and then
randomly picks a hashcode from that file.

My code is based on the scanDirectory function of util/storage.
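
For reference, here is roughly the contract from util/storage that I'm relying
on; treat the exact types as an assumption on my part, the declarations in the
tree may differ slightly. The callback is invoked once per directory entry,
and scanDirectory returns the number of entries it visited (0 if none):

typedef void (*DirectoryEntryCallback)(char * filename,
                                       char * dirName,
                                       void * data);

int scanDirectory(char * dirName,
                  DirectoryEntryCallback callback,
                  void * data);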

Here is the code (I've added two functions: the first counts the matching
files, the second picks one file from that set):

/**
 * Count the files that contain data with priority at least as
 * high as the given minimum.
 *
 * @param file the filename of the current file
 * @param dir the directory name
 * @param nb two numbers: the minimum priority a file must have
 *        to be counted, and the running count of such files
 **/
void countFile(char *file, char *dir, int (*nb)[2])
{
  int filenum;
  if ((filenum=atoi(file))<0) //skip names that do not parse to a valid priority
    return;

  if (filenum>=((*nb)[0])) //name is at least the minimum, count it
    ((*nb)[1])++;
}

/**
 * Select the nth file from the set of matching files.
 *
 * @param file the filename of the current file
 * @param dir the directory name
 * @param nb three numbers: the minimum priority a file must have,
 *        the number of matching files still to skip, and (output)
 *        the name of the selected file
 **/
void getRandomFileName(char *file, char *dir, int (*nb)[3])
{
  int filenum;
  if ((filenum=atoi(file))<0) //skip names that do not parse to a valid priority
    return;

  if (filenum<((*nb)[0])) //below the minimum priority, skip
    return;
  if ((*nb)[1]==0) //counter hit zero: this is the chosen file
    (*nb)[2]=filenum;
  ((*nb)[1])--;
}

/**
 * Return a random key from the database (just the key, not the
 * content!).  Note that the selection is not strictly random but
 * strongly biased towards content of a low priority (which we are
 * likely to discard soon).
 *
 * @param ce output information about the key
 * @return SYSERR on error, OK if ok.
 **/
int getRandomContent(HighDBHandle handle,
                     ContentIndex * ce) {
  DatabaseHandle * dbf = handle;
  HashCode160 query;
  HashCode160 * result;
  void * vresult;
  unsigned int rprio;
  int finiteLoop;
  int cnt;
  int res=-1;
  
  finiteLoop = 0; /* at most 1000 iterations! */
  /* FIXME: we may also want to take the
     default content-priority as an alternative
     base-value (or do some more exhaustive search) */
  while ( (res == -1) &&
          (finiteLoop < 100000) ) {
    int count[2];
    int retrievefile[3];
    int nbfile=0;
    finiteLoop += 100;
    rprio = dbf->minPriority + randomi(finiteLoop);
    result = NULL;
    //count the files whose name (priority) is at least rprio
    count[0]=rprio;
    count[1]=0;

    if ((nbfile=scanDirectory(dbf->pIdx,
                              (DirectoryEntryCallback)&countFile,
                              &count)) == 0)
      continue;
    if (count[1]==0) break;

    //pick one file uniformly at random among the count[1] files
    //whose name is at least rprio
    retrievefile[0]=rprio;
    retrievefile[1]=randomi(count[1]);
    retrievefile[2]=0;

    if ((nbfile=scanDirectory(dbf->pIdx,
                              (DirectoryEntryCallback)&getRandomFileName,
                              &retrievefile)) == 0)
      continue;
    
    res = pidxReadContent(dbf->pIdx,
                          retrievefile[2], // rprio,
                          &result);

    if (res == -1)
      continue;
    cnt = res / sizeof(HashCode160);
    if (cnt == 0) {
      FREENONNULL(result); //read succeeded but was empty; don't leak the buffer
      continue; /* something wrong */
    }
    cnt = randomi(cnt);
    memcpy(&query,
           &result[cnt],
           sizeof(HashCode160));
    FREE(result);
  }
  if (res == -1)
    return SYSERR;

  /* now get the ContentIndex */
  vresult = NULL;
  res = readContent(handle,
                    &query,
                    ce,
                    &vresult,
                    0);
  if (res == -1)
    return SYSERR;
  FREENONNULL(vresult);
  return OK;
}
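
To sanity-check the two helpers outside of gnunetd, a small stand-alone test
like the following can drive them directly; the fixed name list stands in for
the pIdx directory and rand() stands in for randomi(), so this is only a
sketch of the count-then-pick pattern, not GNUnet code:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* the two callbacks defined above */
void countFile(char *file, char *dir, int (*nb)[2]);
void getRandomFileName(char *file, char *dir, int (*nb)[3]);

int main()
{
  char *names[] = { "10", "25", "3", "42", "17" }; /* fake directory: names are priorities */
  int n = sizeof(names) / sizeof(names[0]);
  int count[2] = { 20, 0 }; /* minimum priority 20, running count 0 */
  int pick[3];
  int i;

  srand((unsigned int)time(NULL));

  /* first pass: count the files with priority >= 20 */
  for (i = 0; i < n; i++)
    countFile(names[i], ".", &count);
  printf("%d files with priority >= %d\n", count[1], count[0]);

  if (count[1] > 0) {
    /* second pass: skip a random number of matches, keep the next one */
    pick[0] = 20;                /* same minimum priority */
    pick[1] = rand() % count[1]; /* stand-in for randomi(count[1]) */
    pick[2] = 0;
    for (i = 0; i < n; i++)
      getRandomFileName(names[i], ".", &pick);
    printf("picked file %d\n", pick[2]);
  }
  return 0;
}

Note that each attempt makes two full passes over the directory, so the cost
per attempt is linear in the number of files; that is part of what I want to
measure under load.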

This is just a first attempt; I'm going to test it with big files and under
some load, to see how it behaves and whether it uses too much CPU.

Eric.



