There is a webcomic called Strong Female Protagonist that I want to preserve (in case the website is ever lost), but I am not sure how.

The image you see above is not a page of the comic itself but rather a drop-down-style menu. There is a web crawler called WFDownloader (I am running the Windows .exe through Bottles) that can grab images and follow links, grabbing images “N” pages deep, but since this is a drop-down menu I am not sure it will work.

There is also the issue of organizing the images: WFDownloader doesn’t have any options for organizing them.

What I am thinking is to somehow translate the HTML for the drop-down menu into separate XML files based on issues/titles, run a script to download the images, name each image after its own hyperlink, and put each issue in its own folder. Later on I could create a stitched-together version of each issue.
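Skipping the intermediate XML step, a rough one-pass sketch of the script I have in mind (the base URL and the CSS selectors are placeholders until the real page source is inspected):

```python
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = "https://example.com/"  # placeholder for the comic's site

resp = requests.get(BASE)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Assume the drop-down groups pages by issue, e.g. <optgroup label="Issue 1">.
for group in soup.select("select#chapter optgroup"):       # hypothetical selector
    issue_dir = group["label"].replace(" ", "_")
    os.makedirs(issue_dir, exist_ok=True)
    for option in group.select("option"):
        page_url = urljoin(BASE, option["value"])
        page = BeautifulSoup(requests.get(page_url).text, "html.parser")
        img = page.select_one("img.comic")                 # hypothetical selector
        if img is None:
            continue
        # Name the file after the page's own URL slug, as planned above.
        name = page_url.rstrip("/").rsplit("/", 1)[-1] + ".png"
        with open(os.path.join(issue_dir, name), "wb") as f:
            f.write(requests.get(urljoin(page_url, img["src"])).content)
```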

  • Archr@lemmy.world

    Usually when I need to do something like this I use Python and BeautifulSoup4. You basically fetch the content of the web page and use bs4 to parse it and pull out the correct links. You will need to look at the source of the page to understand its format.
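    A minimal sketch of that pattern; the URL is a placeholder, and the loop just dumps every link so you can work out which ones matter:

    ```python
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/comic")  # placeholder URL
    resp.raise_for_status()

    soup = BeautifulSoup(resp.text, "html.parser")

    # List every link on the page; the right selector depends on the
    # site's actual markup, which you find by reading the page source.
    for a in soup.find_all("a", href=True):
        print(a["href"], a.get_text(strip=True))
    ```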

    If Python’s requests isn’t able to get the right data, then you might need to use Selenium to run a full web browser that renders the page and executes any JavaScript that populates it. Then you send that page content to bs4.
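    A minimal sketch of that fallback, assuming a local Chrome/chromedriver install; again the URL is a placeholder:

    ```python
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from bs4 import BeautifulSoup

    options = Options()
    options.add_argument("--headless=new")  # no visible browser window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://example.com/comic")  # placeholder URL
        html = driver.page_source  # the DOM after JavaScript has run
    finally:
        driver.quit()

    # Hand the rendered page to bs4 exactly as before.
    soup = BeautifulSoup(html, "html.parser")
    print(soup.title.string if soup.title else "no <title> found")
    ```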

    Edit: I know someone posted a link to an archive, but I figured some instructions would also be useful.