1. 02 Jan, 2022 1 commit
  2. 20 Nov, 2021 1 commit
    • Update mailing list url · b0a2fd75
      Morgan Adamiec authored
      change pacman-dev@archlinux.org to pacmandev@lists.archlinux.org
      
      Most of this is copyright notices but this also fixes FS#72129 by
      updating the address in docs/index.asciidoc.
  3. 04 Sep, 2021 1 commit
    • libalpm: Give -U downloads a random .part name if needed · c0026caa
      Morgan Adamiec authored and Allan McRae committed
      
      
      archweb's download links all ended in /download. This caused all the
      temp files to be named download.part. With parallel downloads, multiple
      downloads went to the same temp file and broke the transaction.
      
      Assign random temporary filenames to downloads from URLs that are
      either missing a filename or whose filename does not contain at least
      three hyphens (as a well formed package filename does).
      
      While this approach to determining when to use a temporary filename is
      not 100% foolproof, it does keep nice looking download progress bar names
      when a proper package filename is given. The only downside of not using
      temporary files when provided with a filename with three or more
      hyphens is that URLs crafted specifically to bypass temporary filename
      usage cannot be downloaded in parallel. We probably do not want to
      download packages from such URLs anyway.
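Sketched in C, the hyphen heuristic looks like this (a standalone illustration with a hypothetical function name, not the actual libalpm code):

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the heuristic described above: treat a URL's basename as a
 * proper package filename only if it contains at least three hyphens,
 * e.g. bash-5.1.008-1-x86_64.pkg.tar.zst; anything else would get a
 * random .part name. Illustrative only, not the libalpm implementation. */
static bool looks_like_package_filename(const char *url)
{
	const char *base = strrchr(url, '/');
	base = base ? base + 1 : url;
	int hyphens = 0;
	for(const char *p = base; *p; p++) {
		if(*p == '-') {
			hyphens++;
		}
	}
	return *base != '\0' && hyphens >= 3;
}
```

A URL ending in `/download/` has an empty basename and zero hyphens, so it fails the check and would receive a temporary name.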
      
      Fixes FS#71464
      
      Modified-by: Allan McRae (do not use temporary files for realish URLs)
      Signed-off-by: Allan McRae <allan@archlinux.org>
  4. 01 Mar, 2021 1 commit
  5. 07 Jul, 2020 1 commit
    • Move signature payload creation to download engine · f078c2d3
      Anatol Pomozov authored and Allan McRae committed
      Until now the caller of the ALPM download functionality has been in
      charge of payload creation, both for the main file (e.g. *.pkg) and
      for the accompanying *.sig file. One advantage of this solution is
      that all payloads are independent and can be fetched in parallel,
      exploiting the maximum level of download parallelism.
      
      To build the *.sig file URL we have been using simple string
      concatenation: $requested_url + ".sig". Unfortunately there are cases
      when this does not work. For example, an archlinux.org "Download From
      Mirror" link looks like
      https://www.archlinux.org/packages/core/x86_64/bash/download/ and gets
      redirected to some mirror. But if we append ".sig" to that URL and try
      to download it, archlinux.org returns a 404 error.
      
      To overcome this issue we need to follow redirects for the main
      payload first, find the final URL and only then append the '.sig'
      suffix. This implies 2 things:
       - the signature payload initialization needs to be moved to dload.c,
       as that is where we have access to the resolved URL
       - *.sig is downloaded serially with the main payload, which reduces
       the level of parallelism
      
      Move *.sig payload creation to dload.c. Once the main payload is
      fetched successfully, we check whether the caller asked to download
      the accompanying signature. If yes, create a new payload and add it to
      mcurl.

      The *.sig payload does not use the server list of the main payload
      and thus does not support mirror failover; the *.sig file comes from
      the same server as the main payload.
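The suffix step itself is simple; a hypothetical helper might look like this (the real code first obtains the post-redirect URL, e.g. via curl's CURLINFO_EFFECTIVE_URL):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: given the final, post-redirect URL of the main payload, the
 * signature URL is that string plus ".sig". Hypothetical helper, not
 * the actual dload.c code. */
static char *build_sig_url(const char *effective_url)
{
	size_t len = strlen(effective_url) + sizeof(".sig"); /* includes NUL */
	char *sig_url = malloc(len);
	if(sig_url == NULL) {
		return NULL;
	}
	snprintf(sig_url, len, "%s.sig", effective_url);
	return sig_url;
}
```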
      
      Refactor the event loop in curl_multi_download_internal() a bit.
      Instead of relying on curl_multi_check_finished_download() to return
      the number of new payloads, we simply rerun the loop iteration one
      more time to check whether any active downloads are left.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
  6. 26 Jun, 2020 2 commits
    • Cleanup the old sequential download code · 84723cab
      Anatol Pomozov authored and Allan McRae committed
      
      
      All users of _alpm_download() have been refactored to the new API.
      It is time to remove the old _alpm_download() functionality now.
      
      This change also removes obsolete SIGPIPE signal handler functionality
      (this is a leftover from libfetch days).
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
    • Convert '-U pkg1 pkg2' codepath to parallel download · 16d98d65
      Anatol Pomozov authored and Allan McRae committed
      
      
      Installing remote packages by URL is an interesting case for the ALPM
      API. Unlike package sync ('pacman -S pkg1 pkg2'), '-U' does not deal
      with the server mirror list. Thus _alpm_multi_download() should be
      able to handle file downloads for payloads that have either the
      'fileurl' field or the pair of fields ('servers' and 'filepath') set.

      The signature of alpm_fetch_pkgurl() has changed; it now accepts an
      output list that is populated with the filepaths of the fetched
      packages.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
  7. 09 May, 2020 5 commits
    • Implement multibar UI · b96e0df4
      Anatol Pomozov authored and Allan McRae committed
      
      
      Multiplexed download requires the ability to draw a UI for multiple
      active progress bars. To implement this, we use ANSI codes to move the
      cursor up/down and then redraw the required progress bar.
      `pacman_multibar_ui.active_downloads` represents the list of active
      downloads that correspond to progress bars.
      `struct pacman_progress_bar` is the data structure for a progress bar.
      
      In some cases (e.g. database downloads) we want to keep the progress
      bars in order. In other cases (package downloads) we want to move
      completed items to the top of the screen. The function
      `multibar_move_completed_up` allows configuring this behavior.
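The cursor-movement redraw can be sketched as an escape-sequence formatter (the layout here, move up N rows, clear the line, redraw, move back down, is an assumption, and the function name is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the ANSI redraw described above: move the cursor up
 * 'rows_up' lines (ESC[nA), clear that line (ESC[K), print the bar
 * text, then move back down (ESC[nB) so later output lands below the
 * bars. Illustrative only, not the actual pacman code. */
static int format_bar_redraw(char *buf, size_t buflen,
		int rows_up, const char *bar_text)
{
	return snprintf(buf, buflen, "\x1b[%dA\r\x1b[K%s\x1b[%dB",
			rows_up, bar_text, rows_up);
}
```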
      
      Per discussion on the mailing list, we do not want to show download
      progress for signature files.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
    • Implement multiplexed download using mCURL · 6a331af2
      Anatol Pomozov authored and Allan McRae committed
      
      
      curl_multi_download_internal() is the main loop that creates up to
      'ParallelDownloads' easy curl handles, adds them to mcurl and then
      performs curl execution. This is when the parallel downloads happen.
      Once any of the downloads completes, the function checks its result.
      If the download failed, it initiates a retry with the next server
      from the payload->servers list. At download completion, all the
      payload resources are cleaned up.
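The scheduling shape of that loop can be sketched without libcurl (all names below are illustrative; the real loop drives curl_multi_perform() and checks per-transfer results):

```c
/* Pure-C sketch of the loop shape described above: keep at most
 * 'max_parallel' transfers active; as one finishes, start the next.
 * Retry/failover handling is omitted. Illustrative, not libalpm code. */
static int run_transfers(int total, int max_parallel, int *peak_active)
{
	int started = 0, finished = 0, active = 0;
	*peak_active = 0;
	while(finished < total) {
		/* real code: create easy handles and add them to mcurl */
		while(active < max_parallel && started < total) {
			started++;
			active++;
		}
		if(active > *peak_active) {
			*peak_active = active;
		}
		/* real code: curl_multi_perform() + result checking; here we
		 * pretend exactly one transfer completes per iteration */
		active--;
		finished++;
	}
	return finished;
}
```

The invariant worth noting is that `active` never exceeds `max_parallel`, which is exactly the 'ParallelDownloads' cap.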
      
      curl_multi_check_finished_download() is essentially a refactored
      version of curl_download_internal() adapted for multi_curl. Once the
      mcurl porting is complete, curl_download_internal() will be removed.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
    • Inline dload_payload->curlerr field into a local variable · fa68c33f
      Anatol Pomozov authored and Allan McRae committed
      
      
      dload_payload->curlerr is a field that is used only inside the
      curl_download_internal() function. It can be converted to a local
      variable.
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
    • Add multi_curl handle to ALPM global context · dc98d0ea
      Anatol Pomozov authored and Allan McRae committed
      To be able to run multiple downloads in parallel efficiently, we need
      to use the curl_multi interface [1]. It introduces a set of APIs
      around a new handle type, 'CURLM'.
      
      Create the CURLM object at application start and store it in the
      global ALPM context.
      
      The 'single-download' CURL handle moves to the payload struct. A new
      CURL handle is created for each payload, with the intention that it
      be processed by CURLM.

      Note that curl_download_internal() is not ported to the CURLM
      interface because the function will go away soon.
      
      [1] https://curl.haxx.se/libcurl/c/libcurl-multi.html
      
      
      
      Signed-off-by: Allan McRae <allan@archlinux.org>
    • Introduce alpm_dbs_update() function for parallel db updates · a8a1a1bb
      Anatol Pomozov authored and Allan McRae committed
      
      
      This is the equivalent of alpm_db_update() but for multiplexed
      (parallel) download. The difference is that this function accepts a
      list of databases to update, and the ALPM internals then download
      them in parallel if possible.
      
      Add a stub for _alpm_multi_download(), the function that will perform
      parallel payload downloads in the future.
      
      Introduce the dload_payload->filepath field, which contains the URL
      path to the file we download. It is like the fileurl field but does
      not contain the protocol/server part. The rationale for this field is
      that with the curl multidownload the server retry logic is going to
      move to a curl callback, and the callback needs to be able to
      reconstruct the 'next' fileurl. It can do so by taking the next
      server URL from the 'servers' list and concatenating it with
      filepath. Once the 'parallel download' refactoring is over, the
      'fileurl' field will go away.
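Rebuilding the next candidate fileurl from a server entry and the filepath amounts to a join; a hypothetical helper (not the actual callback code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the concatenation described above: the next candidate URL
 * is server + '/' + filepath. Hypothetical helper name, illustrative
 * only. */
static char *join_server_filepath(const char *server, const char *filepath)
{
	size_t len = strlen(server) + strlen(filepath) + 2; /* '/' + NUL */
	char *fileurl = malloc(len);
	if(fileurl == NULL) {
		return NULL;
	}
	snprintf(fileurl, len, "%s/%s", server, filepath);
	return fileurl;
}
```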
      
      Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
  8. 10 Feb, 2020 1 commit
  9. 23 Oct, 2019 1 commit
  10. 13 May, 2018 1 commit
    • Remove all modelines from the project · 860e4c49
      Eli Schwartz authored and Allan McRae committed
      
      
      Many of these are pointless (e.g. there is no need to explicitly turn on
      spellchecking and language dictionaries for the manpages by default).
      
      The only useful modelines are the ones enforcing the project coding
      standards for indentation style (and "maybe" filetype/syntax, but
      everything except the asciidoc manpages and makepkg.conf is already
      autodetected), and indent style can be applied more easily with
      .editorconfig.
      
      Signed-off-by: Eli Schwartz <eschwartz@archlinux.org>
      Signed-off-by: Allan McRae <allan@archlinux.org>
  11. 14 Mar, 2018 1 commit
  12. 06 Jan, 2018 1 commit
    • dload: ensure callback is always initialized once · 59bb21fc
      Andrew Gregory authored and Allan McRae committed
      
      
      Frontends rely on an initialization call for setup between downloads.
      Checking for initialization after checking for a completed download
      can skip initialization in cases where files are small enough to be
      downloaded all at once (FS#56408). Relying on the previous download
      size can result in multiple initializations if there are multiple
      non-transfer events prior to the download starting (FS#56468).
      
      Introduce a new cb_initialized variable to the payload struct and use it
      to ensure that the callback is initialized exactly once prior to any
      actual events.
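The guard can be sketched like this (the struct and callback are stand-ins; only the cb_initialized field name comes from the commit, and a counter replaces the frontend callback so the sketch is self-contained):

```c
#include <stdbool.h>

/* Sketch of the initialize-exactly-once guard described above. */
struct payload_sketch {
	bool cb_initialized;
	int init_calls; /* stands in for invoking the frontend callback */
};

static void ensure_cb_initialized(struct payload_sketch *p)
{
	if(!p->cb_initialized) {
		p->init_calls++; /* real code invokes the frontend's init callback */
		p->cb_initialized = true;
	}
}
```

However many events arrive before the transfer starts, the callback body runs exactly once.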
      
      Fixes FS#56408, FS#56468
      
      Signed-off-by: Andrew Gregory <andrew.gregory.8@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
  13. 04 Jan, 2017 1 commit
  14. 05 Dec, 2016 1 commit
    • Parametrise the different ways in which the payload is reset · e83e868a
      Martin Kühne authored and Allan McRae committed
      
      
      As reported in FS#43434, downloads which fail and are restarted on a
      different server resume and may display a negative download speed.
      The payload's progress in libalpm was not properly reset, which
      ultimately caused terminal noise, because the line width calculation
      assumes positive download speeds.
      
      This patch fixes the incomplete reset of the payload by mimicking
      what be_sync.c:alpm_db_update() does over in
      sync.c:download_single_file(). The new
      dload.c:_alpm_dload_payload_reset_for_retry() extends beyond the
      current behavior by updating initial_size and prevprogress for this
      case. This makes pacman reset the progress properly in the next
      invocation of the callback and display positive download speeds.
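A minimal sketch of that reset, assuming fields of these names on the payload (the struct and types here are illustrative, not libalpm's):

```c
/* Sketch of the effect of _alpm_dload_payload_reset_for_retry() as
 * described above: zero the progress bookkeeping so the next callback
 * invocation starts fresh and cannot derive a negative speed. */
struct progress_sketch {
	long initial_size;
	long prevprogress;
};

static void reset_for_retry(struct progress_sketch *p)
{
	p->initial_size = 0;
	p->prevprogress = 0;
}
```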
      
      Fixes FS#43434.
      
      Signed-off-by: Martin Kühne <mysatyre@gmail.com>
      Signed-off-by: Allan McRae <allan@archlinux.org>
  15. 25 Sep, 2016 1 commit
  16. 04 Jan, 2016 1 commit
  17. 01 Feb, 2015 1 commit
  18. 19 Oct, 2014 1 commit
  19. 28 Jan, 2014 1 commit
  20. 06 Jan, 2014 2 commits
  21. 18 Sep, 2013 1 commit
    • dload: avoid renaming files downloaded via sync operations · 3b3152fc
      Christian Hesse authored and Allan McRae committed
      
      
      If the server redirects from ${repo}.db to ${repo}.db.tar.gz, pacman
      gets this wrong: it saves to the new filename and then fails when
      accessing ${repo}.db.

      We need the remote filename only when downloading remote files with
      pacman's -U operation. This introduces a new field
      'trust_remote_name' to the payload. If set, pacman downloads to the
      filename given by the server.
      
      The field trust_remote_name is set in alpm_fetch_pkgurl().
      
      Fixes FS#36791 ([pacman] downloads to wrong filename with redirect).
      
      [dave: remove redundant assignment leading to memory leak]
      
      Signed-off-by: Allan McRae <allan@archlinux.org>
  22. 29 Jan, 2013 1 commit
  23. 17 Jan, 2013 1 commit
  24. 03 Jan, 2013 1 commit
  25. 20 Feb, 2012 1 commit
  26. 22 Oct, 2011 1 commit
  27. 17 Oct, 2011 1 commit
  28. 14 Oct, 2011 1 commit
  29. 12 Oct, 2011 1 commit
    • Introduce alpm_time_t type · 5f3629be
      Dan McGee authored
      
      
      This will always be a 64-bit signed integer rather than the
      variable-width time_t type. Dates beyond 2038 should be fully
      supported in the library; the frontend still lags behind because
      32-bit platforms provide no localtime64() or equivalent function to
      convert from an epoch value to a broken-down time structure.
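The type itself is just a fixed-width typedef; a sketch of the idea:

```c
#include <stdint.h>

/* Sketch: a fixed-width 64-bit signed integer for timestamps,
 * independent of the platform's time_t width, so dates beyond 2038 are
 * representable even on 32-bit platforms. */
typedef int64_t alpm_time_t;
```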
      
      Signed-off-by: Dan McGee <dan@archlinux.org>
  30. 29 Sep, 2011 1 commit
  31. 28 Sep, 2011 3 commits
    • Refactor download payload reset and free · e0acf2f1
      Dan McGee authored
      
      
      This was done to squash a memory leak in the sync database download
      code. When we downloaded a database and then reused the payload struct,
      we could find ourselves calling get_fullpath() for the signatures and
      overwriting non-freed values we had left over from the database
      download.
      
      Refactor the payload_free function into a payload_reset function that
      does NOT free the payload itself, so we can reuse payload structs.
      This also allows us to move the payload to the stack in some call
      paths, relieving us of the need to allocate space.
      
      Signed-off-by: Dan McGee <dan@archlinux.org>
    • Initialize cURL library on first use · 9a58d5c6
      Dan McGee authored
      
      
      Rather than always initializing it on any handle creation. There are
      several frontend operations (search, info, etc.) that never need the
      download code, so spending time initializing this every single time is a
      bit silly. This makes it a bit more like the GPGME code init path.
      
      Signed-off-by: Dan McGee <dan@archlinux.org>
    • Fix memory leak in download payload->remote_name · f66f9f11
      Dan McGee authored
      
      
      In the sync code, we explicitly allocated a string for this field, while
      in the dload code itself it was filled in with a pointer to another
      string. This led to a memory leak in the sync download case.
      
      Make remote_name non-const and always explicitly allocate it. This
      patch ensures that, uses malloc + snprintf (rather than calloc) in
      several codepaths, and eliminates the only use of PATH_MAX in the
      download code.
      
      Signed-off-by: Dan McGee <dan@archlinux.org>
  32. 25 Aug, 2011 1 commit