[Dev] [Libretools - Bug #314] (open) have per-user staging dirs to support concurrent librerelease runs

Nicolás Reynolds fauno at kiwwwi.com.ar
Sun Apr 7 17:52:12 GMT 2013


Michał Masłowski <mtjm at mtjm.eu> writes:

>> so the problem is that alice has to upload 3 big packages, the
>> connection is lost after 1.5 packages have been uploaded, and bob
>> releases the first one (db-update ignores the temp file).  alice
>> doesn't notice, because the failed sync doesn't know it has to skip
>> the first package (it's no longer in staging, so rsync uploads it
>> again) and continue from the second.  the result is alice uploading
>> all 3 packages again (if the connection isn't lost) and the first
>> package being released twice, no?
>
> It is; instead of releasing twice, the second db-update will error.
>
>> i think it would be simpler to lock the stage area until all uploads are
>> done.
>
> One of librerelease runs that I did for the new KDE version involved 90
> packages, 250 MiB of data and would take more than two hours if there
> were no network problems.  Another user doing a similarly sized upload
> would wait four hours.
>
>> this doesn't avoid the problem of packages being built by two or
>> more packagers, which is solved by coordination (!) and librerelease
>> checking if a package was already released[0].  this would solve Alice
>> having to reupload a package and Bob missing Alice saying "i'll do
>> package X".
>
> This is mostly solved by removing staged packages when db-update
> considers them already released, except for some rare cases when
> fullpkg produces an incorrect build order due to bad metadata
> (e.g. ruby and graphviz).
>
>> [0]: the quickest way i can think of is checking if the repo returns
>>      http code 200 by requesting the headers for the file:
>>
>>      curl --head \
>>        http://repo.parabolagnulinux.org/libre/os/i686/linux-libre-3.8.5-1-i686.pkg.tar.xz \
>>        | head -n1 | cut -d" " -f2
>
> It would solve only one problem and need 90 HTTP requests; maybe there
> are simpler, more general solutions without too big a performance cost.

they're just HEAD requests, i don't think they're much of an overhead
for our server.  checking if the packages are already publicly available
will save you from reuploading all 90; db-update's cleanup ignoring
rsync tmpfiles will solve the other problem (i thought it was doing this
already?)
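
a rough sketch of that check (untested; $staging here is a stand-in for
wherever librerelease keeps the staged packages, and the url assumes
libre/i686):

    mirror=http://repo.parabolagnulinux.org/libre/os/i686
    for pkg in "$staging"/*.pkg.tar.xz; do
        # HEAD request; "200" means the file is already public
        status=$(curl -s --head "$mirror/${pkg##*/}" | head -n1 | cut -d" " -f2)
        if [ "$status" = 200 ]; then
            echo "already released, skipping: ${pkg##*/}"
            rm -- "$pkg"
        fi
    done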

i can think of at least three more ways of doing it, though they all
require more bytes to be transmitted:

* librerelease runs -Sy and checks against the local database (first
  sketch below)

* generate an include-list for rsync and do a dry run between
  repo:pool/packages/ and local:staging/, parsing rsync's output
  afterwards (second sketch below)

* similarly, download the repo's index.html (i think extra/index.html
  can be up to 1M; third sketch below)
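
for the first one, roughly (untested; assumes staged filenames follow
pkgname-pkgver-pkgrel-arch.pkg.tar.xz):

    sudo pacman -Sy
    for pkg in "$staging"/*.pkg.tar.xz; do
        f=${pkg##*/}; f=${f%-*.pkg.tar.xz}    # linux-libre-3.8.5-1
        ver=${f#"${f%-*-*}"-}                 # 3.8.5-1
        name=${f%-"$ver"}                     # linux-libre
        repover=$(pacman -Si "$name" 2>/dev/null \
                  | sed -n 's/^Version *: //p' | head -n1)
        [ "$repover" = "$ver" ] && echo "already in the sync db: $name $ver"
    done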
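
for the second, something like this (untested; the rsync module and pool
path are guesses, and i'm using --list-only instead of a dry run proper,
since all we care about is whether the files already exist on the
server):

    cd "$staging"
    printf '%s\n' *.pkg.tar.xz > /tmp/staged.list
    # list only the remote files that share a name with something staged
    rsync --list-only --include-from=/tmp/staged.list --exclude='*' \
          repo.parabolagnulinux.org::packages/pool/packages/ \
          | awk '$NF ~ /\.pkg\.tar\.xz$/ {print $NF}'
    # every name printed is already in the pool and can be dropped
    # from staging before the real upload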
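
and for the third (untested; one request instead of 90, at the cost of
downloading the whole listing):

    curl -s http://repo.parabolagnulinux.org/libre/os/i686/ > /tmp/listing.html
    for pkg in "$staging"/*.pkg.tar.xz; do
        grep -qF "${pkg##*/}" /tmp/listing.html \
            && echo "already released: ${pkg##*/}"
    done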