The backup is non-optional IMHO. If it were me, I'd do the backup, the fsck, and the conversion to ext4 all while I had the outage. Might save you in future.
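For reference, the in-place ext3-to-ext4 conversion is only a few commands per volume; a rough sketch, with /dev/sdXN standing in as a placeholder for one of your volumes:

    umount /dev/sdXN                                  # feature flags must be set offline
    tune2fs -O extents,uninit_bg,dir_index /dev/sdXN
    fsck -pf /dev/sdXN                                # a full fsck is mandatory after enabling the features
    # then change the volume's /etc/fstab entry from ext3 to ext4

Bear in mind that existing files keep the old indirect-block layout; only files written after the conversion use extents, so the fsck speedup grows over time.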
Exactly, that's what we're thinking: just because it says it's clean doesn't necessarily mean it is clean, and I'm sure a check would find at least some minor issues. We do have a good backup, so that part I'm not worried about. I was more worried about whether there is a way to figure out beforehand how long it might actually take to check all the volumes, or, if I have to do it volume by volume from single-user mode, how long each volume would take.
I want to avoid saying "the system will be down for 6 hours" when there was a way I could have figured out beforehand that it will really be down for 24 hours while the checks run.

Hi there.

Quote: Originally Posted by syg: Let's hope the backup you take immediately prior to this is good. That difference may prove to be hugely significant.

You can run "fsck -n -f" quite safely on a mounted filesystem. You will get a good idea of how long the check takes, but unless you are familiar with the inconsistencies that are to be expected on a read-write mounted filesystem, the results won't be terribly meaningful.
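To turn that into a downtime estimate, just wrap the dry run in time, once per volume; a minimal sketch, again with /dev/sdXN as a placeholder device:

    # -n answers "no" to every repair prompt (read-only), -f forces a full check
    time fsck -n -f /dev/sdXN

Since the real check at boot runs on an otherwise idle system, a figure measured under normal load should be a safe worst case to quote.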
If by any chance there are filesystems that can be temporarily mounted read-only, you can get a valid check. If that check happens to report that the filesystem is clean, you can run "tune2fs -C 0 -T now" on the containing device to record that check in the superblock.
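Sketched out, that read-only variant looks something like this, with placeholder device and mount point names:

    mount -o remount,ro /mnt/data    # only succeeds if nothing holds files open read-write
    fsck -n -f /dev/sdXN             # a valid check: nothing can change underneath it
    tune2fs -C 0 -T now /dev/sdXN    # if clean, reset the mount count and last-checked time
    mount -o remount,rw /mnt/data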
But never allow fsck to make any repairs on a mounted filesystem, even one that is mounted read-only. That can cause serious problems, including a possible kernel panic.

You mention that the whole thing is considerably faster with ext4, and I've heard other people say the same. Is there any rule of thumb as to how much faster an fsck would be? A tenth of the time? Mere seconds?

I think it's between 10 and 20 minutes. Also, since you have a dual-drive setup, you should be able to run the fsck on each drive in parallel (see the sketch below).

Quote: I think it's between 10 and 20 minutes. Did you format with the sparse superblock?
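To run the two checks in parallel from single-user mode, just background them and wait; a minimal sketch, assuming the drives show up as /dev/sda1 and /dev/sdb1:

    fsck -fy /dev/sda1 &    # check the first drive in the background
    fsck -fy /dev/sdb1 &    # and the second at the same time
    wait                    # block until both checks have finished
    # -y answers yes to repair prompts, so a background job never stalls on input

Since the two filesystems sit on separate spindles the checks don't compete for I/O, and the total time is roughly that of the slower drive.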
Yeah, but your filesystem was unmounted cleanly. I don't know about Doug's, or what situation Trevor was referring to. My situation was caused by an emplode failure during a re-sync (lost network connection), so the filesystem got corrupted while it was mounted read-write.
A manual fsck on each drive and a manual database rebuild fixed everything.

I connected a monitor and there was no video output. I can connect a keyboard; I will try that. I am afraid to turn off the power, but it may come to that.

And if, for some reason, fsck hasn't completed, then the forcefsck file will still be present and the check will start again on the next boot.
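For anyone following along: on sysvinit-style systems the flag file is created by hand and the boot scripts remove it only after the check completes, roughly like this (on systemd-based distros the equivalent is booting with fsck.mode=force on the kernel command line instead):

    touch /forcefsck    # request a forced fsck of every filesystem on the next boot
    reboot
    # the boot scripts delete /forcefsck after a successful check, so an
    # interrupted check leaves the file in place and fsck runs again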
Nothing appeared on the monitor, but the system completed its boot! I have lights on my Pi to indicate network activity, and they started to flash. I was then able to connect with SSH. So apparently it had been waiting for some kind of response from me, which it couldn't get because the network wasn't initialized yet.

What filesystem are you using? Why are you running a production web server on a single disk? Servers with single disks aren't servers; they're ticking time bombs.
It sounds like that disk is dying. Migrate your data to a real RAID array with a battery-backed hardware controller immediately.

I don't see his hdparm benchmark having much to do with "parallel access". It sounds more, to me, like he's got a failing disk.
It was faster in the past and now it's not, probably because the drive is relocating sectors. Based on the very slow baseline of 80 megabits per second, I was assuming the test was run on an active system. I would expect around 5 hours for the fsck to complete. I would instead consider a migration to reiserfs, which means testing, testing, and more testing.

No. fsck can fix corrupted filesystem metadata, not a broken disk, nor is it a defragmentation tool.

Depends on the filesystem.
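If a dying disk is the suspicion, it's worth checking the SMART counters before blaming the filesystem; a quick sketch, assuming the drive is /dev/sda and smartmontools is installed:

    hdparm -t /dev/sda      # raw sequential read speed, bypassing the filesystem
    smartctl -A /dev/sda    # watch Reallocated_Sector_Ct and Current_Pending_Sector

Nonzero and growing values for those two attributes usually mean the drive is quietly remapping sectors, which matches the "it was faster in the past" symptom.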