I have a question about the site - I have magazines & books etc that I am planning on scanning and uploading to the site if they aren't already there, and I'll probably bin them afterwards (unless someone wants them) - but what I would hate to happen is for me to bin the magazines and then for this site to disappear. I know that's not likely to happen, but it may. So, is it OK to download all the 'downloads' and keep a local copy in case of disaster? If so, is there any better (official) way of doing this than just clicking each file and downloading it?
(I think you've got all the magazines I have, but I have some books - eg TRS-80 Colour Computer Technical Reference Manual & Inside OS9 Level 2 which you don't seem to have)
Mirroring the site?
Re: Mirroring the site?
Thankfully, although I don't visit very often, I have no plans to retire the site. There are a number of great guys on here who contribute content and WIKI pages and generally keep it going.
As to your books, please check http://www.colorcomputerarchive.com/ to see if they have already been done by the CoCo guys.
Simon Hardy
Re: Mirroring the site?
Plans and intentions are one thing, but do you have a bus-factor emergency plan, for instance a backup that somebody else can also access? I have seen more critical (I almost said important) sites disappear through unforeseen events.
- rolfmichelsen
- Posts: 299
- Joined: Wed Apr 08, 2009 8:43 pm
- Location: Oslo, Norway
- Contact:
Re: Mirroring the site?
Tormod,
I'm keeping a backup of the downloads section. Might want to add a backup of the wiki as well.
-- Rolf
Re: Mirroring the site?
That would be nice!
I have a copy of all my scans and dumps, but not of the wiki pages.
Maybe a full site backup would be a nice thing to do every few months.
Re: Mirroring the site?
That's a good point, maybe I need to sort something out so that in case of emergency the backups are put into Dropbox or similar and shared...
Rob/Rolf - What would be your preferred option.... That way you get the full SQL backup and files/folders for the site.
Simon Hardy
Re: Mirroring the site?
Depending on the size of the hard drive containing the archive, maybe a full disk image, compressed and encrypted, and maybe upload it to mega (just an example).
I think that would be quite fast, and if done during the night no user would notice any interruption of service.
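That idea can be sketched as a one-liner pipeline. This is only a sketch, not the site's actual setup: the device name, output path, and destination are all assumptions, and the passphrase handling for gpg would need to be decided separately.

```shell
#!/bin/sh
# Sketch only: /dev/sdb, the output path, and the AES choice are
# assumptions -- adjust for the real server before using this.
DEVICE=/dev/sdb                              # hypothetical disk holding the archive
OUT=/backups/archive-$(date +%F).img.gz.gpg  # date-stamped output file

# Image the disk, compress on the fly, then encrypt symmetrically so the
# copy can sit safely on a third-party host such as Mega.
dd if="$DEVICE" bs=4M status=progress \
  | gzip \
  | gpg --symmetric --cipher-algo AES256 --output "$OUT"
```

Because everything is streamed, no temporary copy of the uncompressed image is ever written to disk.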
Re: Mirroring the site?
Or just do a database dump every now and then and store the result in a directory in the downloads section... I'm mirroring the entire downloads section from time to time (differential copies), so that would just "automatically" work. Anyway, I'm flexible. The most important thing is to have a disaster recovery plan of sorts.
-- Rolf
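The dump-into-downloads idea could look roughly like the script below. It assumes a MySQL-backed site (the thread mentions a "full SQL backup"); the database name, credentials file, destination path, and retention count are all placeholders.

```shell
#!/bin/sh
# Sketch: DB name, credentials file, and paths are assumptions.
DB=dragon_site
DEST=/var/www/downloads/backups

mkdir -p "$DEST"

# Dump and compress the database into a date-stamped file inside the
# downloads tree, so anyone mirroring downloads picks it up automatically.
mysqldump --defaults-extra-file=/root/.my.cnf "$DB" \
  | gzip > "$DEST/db-$(date +%F).sql.gz"

# Keep only the 12 newest dumps so the folder doesn't grow without bound.
ls -1t "$DEST"/db-*.sql.gz | tail -n +13 | xargs -r rm --
```

Run from cron, this gives exactly the "automatic" behaviour described: the dump lands in the downloads section, and existing mirrors carry it along.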
Re: Mirroring the site?
Maybe something like using rsync or similar from time to time?
There are lots of options, so we should go for a reasonable one and see whether it fits our needs or not.
As Rolf said, the important thing is to have a disaster recovery plan.
Re: Mirroring the site?
If the pages used clean URLs, then a simple "wget --mirror ..." could do the job.
But there are ugly URLs with GET parameters, and I don't think wget can rewrite them into a usable state...
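For what it's worth, wget does have flags that try to cope with query-string URLs: --restrict-file-names=windows rewrites the "?" in saved filenames (as "@"), --adjust-extension adds ".html" where appropriate, and --convert-links rewrites the internal links to match. A hedged sketch (the URL is a placeholder):

```shell
# Sketch: placeholder URL. Whether the result is fully browsable
# depends on the site, but these flags handle many GET-parameter URLs.
wget --mirror --convert-links --adjust-extension \
     --page-requisites --restrict-file-names=windows \
     --no-parent https://archive.example.org/
```

It still won't match a proper database-level backup, but it can produce a browsable offline snapshot.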
... too many ideas and too little time ... Related stuff written in Python:
Dragon 32 emulator / PyDC - Python Dragon 32 converter: https://github.com/jedie/DragonPy
DWLOAD server / Dragon-Lib and other stuff: https://github.com/6809