help-gnu-emacs

Re: Workshop to save M$ Windows users - help needed


From: Tomas Hlavaty
Subject: Re: Workshop to save M$ Windows users - help needed
Date: Mon, 04 Oct 2021 20:29:50 +0200

url-retrieve-synchronously2 had a bug.

This version works and should dispose of the network buffer properly:

(defun url-retrieve-synchronously7 (url filename)
  (let (z (again t))
    (url-retrieve url
                  ;; The callback runs in the network buffer once the
                  ;; retrieval finishes.
                  (lambda (status)
                    (setq again nil)
                    (when status
                      (setq z status)
                      ;; Save the buffer contents verbatim.
                      (let ((coding-system-for-write 'raw-text-unix))
                        (write-region (point-min) (point-max) filename)))
                    ;; Dispose of the network buffer.
                    (kill-buffer)))
    ;; Poll until the callback has run.
    (while again
      (sleep-for 1))
    ;; Return the status plist, or nil if nothing was reported.
    z))

(url-retrieve-synchronously7 "https://logand.com" "/tmp/e7")
(url-retrieve-synchronously7 "https://logand.com1" "/tmp/e7")
/tmp/e7

There is a comment in url-retrieve-synchronously about sleep-for and
similar functions suggesting that this approach might not always work,
but it works for me.  Not sure about Windows.
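
If sleep-for ever does turn out to be a problem, the same wait loop can
be written with accept-process-output on the connection's process,
which is roughly what url-retrieve-synchronously itself does.  A
minimal sketch (url-retrieve-synchronously7b is just a hypothetical
name for this variant):

(defun url-retrieve-synchronously7b (url filename)
  (let (z (again t))
    (let ((buffer (url-retrieve url
                                (lambda (status)
                                  (setq again nil)
                                  (when status
                                    (setq z status)
                                    (let ((coding-system-for-write 'raw-text-unix))
                                      (write-region (point-min) (point-max) filename)))
                                  (kill-buffer)))))
      (while again
        ;; Wait up to one second for output from the connection instead
        ;; of sleeping blindly.
        (accept-process-output
         (and (buffer-live-p buffer) (get-buffer-process buffer))
         1)))
    z))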

I think url-retrieve-synchronously should not ignore the status but
return it instead.
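
For example, with the returned status plist a caller could tell errors
and redirects apart.  This just inspects the keys documented for
url-retrieve callbacks, using the same test URL and output file as
above:

(let ((status (url-retrieve-synchronously7 "https://logand.com1" "/tmp/e7")))
  (cond ((plist-get status :error)
         (message "retrieval failed: %S" (plist-get status :error)))
        ((plist-get status :redirect)
         (message "redirected to %s" (plist-get status :redirect)))
        (t
         (message "retrieved, no redirect"))))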

What other issue did you have with url-retrieve-synchronously apart from
not being able to detect errors?  The thread you linked to discussed
only a preference for libcurl; I did not find anything there about what
is wrong with url-retrieve-synchronously.  Another problem with it is
probably that the network buffer is returned to the caller and, iirc,
buffers are not garbage collected, or something like that.  That's why
I put kill-buffer in the callback.
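
For comparison, with the stock url-retrieve-synchronously the caller
has to clean up the returned network buffer itself.  A minimal sketch,
assuming the same test URL and output file as above:

(require 'url)

(let ((buffer (url-retrieve-synchronously "https://logand.com")))
  (when buffer
    (unwind-protect
        (with-current-buffer buffer
          (let ((coding-system-for-write 'raw-text-unix))
            (write-region (point-min) (point-max) "/tmp/e7")))
      ;; Kill the network buffer even if write-region fails.
      (kill-buffer buffer))))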


