Some comments from my end… not a “dummy walkthrough”, though, since my mdadm experience is a bit rusty. So please take these as advice only, and don’t execute any commands I give without verification!
Your md0 is indeed a RAID-1 with three disks according to mdadm. Was it intentional to set it up like that? Having more than two disks in a mirror is of course possible. It seems that only sdb1 and sdc1 are active in md0, and sda2 is set up as a hot spare.
Where do you see that their “sizes do not match”? The sizes of RAID member partitions MUST match (strictly speaking, the smallest one dictates the usable size of the array).
Unfortunately, your code printout of the /proc/mdstat file was partially garbled by forum bugs: a link was inserted where important information should be. You might want to check and fix that. (@Eric: Is it possible to get those forum bugs fixed? Inside code blocks, no links or other markup should be inferred.) According to the mdadm documentation, each member partition listed in /proc/mdstat is followed by its device number within the array in square brackets, and “(F)” is appended if that device has failed.
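For reference, a healthy entry and one with a failed member look roughly like this (device names and block counts here are purely illustrative; newer kernels also mark a spare with “(S)”):

  md0 : active raid1 sdc1[1] sdb1[0]
        976759936 blocks [2/2] [UU]

  md0 : active raid1 sdc1[1] sdb1[0](F)
        976759936 blocks [2/1] [_U]

The [2/1] and [_U] parts also tell you that only one of the two slots is currently up.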
To get details about the failed drive, you can use the commands “mdadm -E /dev/sda1” (examine, to be used with physical partitions) and “mdadm -D /dev/md0” (details, to be used with md devices). Use those to find out which drive actually failed before doing anything else.
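For example (assuming sda1 is the partition in question; adjust to your setup):

  mdadm -E /dev/sda1   # examine the RAID superblock on the member partition
  mdadm -D /dev/md0    # print array details: state, and which member is active/faulty/spare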
To remove it from the array, you IMO wouldn’t need the “--fail” command, since the drive has already failed, but rather “--remove”. Check “man mdadm” for details; the -E and -D output should also tell you more. You’ll need “--fail” only if the defective disk is a member of other md devices and has not been marked failed there.
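A minimal sketch of that, assuming sda1 turns out to be the failed member:

  mdadm /dev/md0 --fail /dev/sda1     # only needed if mdadm doesn’t already list it as failed
  mdadm /dev/md0 --remove /dev/sda1   # detach the failed member from the array
  # after the replacement disk is installed and partitioned the same way:
  mdadm /dev/md0 --add /dev/sda1      # re-add; the array will then resync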
About sda being an IDE disk: Make sure that is really the case. Old IDE disks usually get “hdX” as device nodes and not “sdX”.
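If smartmontools is installed, something like this shows the drive’s model and serial number (and usually the interface type), which also helps you pull the right physical disk later:

  smartctl -i /dev/sda   # print identity info (model, serial, interface) for sda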
Do NOT replace the drive while the server is running, unless you have a SATA controller and power connectors that are specifically meant for hot-swapping!
Whether the server will boot again, before and after you remove the defective disk, depends on whether the boot loader (GRUB?) is installed on all the RAID members. If you configured the RAID during OS installation, the installer should have done that for you, otherwise you’ll want the commands “grub-install” and “update-grub”. Check their man pages; I hope those apply to your CentOS, I’m using Ubuntu/Debian.
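A rough sketch, assuming the remaining members are sdb and sdc; on CentOS with GRUB2 the equivalents are typically “grub2-install” and “grub2-mkconfig -o /boot/grub2/grub.cfg”:

  grub-install /dev/sdb   # put the boot loader on the MBR of each remaining RAID member
  grub-install /dev/sdc
  update-grub             # regenerate the GRUB configuration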
Also, your BIOS needs to be configured to boot not only from the first HDD but also from subsequent ones, in case the failed disk is the first one in your system.