After a lot of thinking, I have decided to merge my two blogs, previously at www.vmwhere.net and jayakumar.co.in, into a single site/blog here at www.jayakumar.org.
The reason is simple: I am lazy and do not want to maintain two sites. Also, after I moved out of VMware late last year, I no longer feel the need to maintain a separate blog for VMware content just to keep that thin distinction between work (VMware) and personal (Linux/networking) stuff. And since joining Cisco I have wanted to blog about the Nexus 1000V and Cisco UCS products, but the mere thought of setting up and maintaining yet another blog kept that idea at bay. So, in effect, I have merged three blogs into one.
Thanks to a catastrophic failure during migration from a private server to a shared server, all the site downloads are gone. The blog posts, comments, and basically everything that was in the database we were able to recover from a DB backup, but anything on the filesystem is lost. All the virtual machine images are gone as well, so there are no more virtual machine downloads for now.
Over the next few weeks I will try to spin up a few of the latest Gentoo and Slackware versions that I had skipped earlier, and hopefully we will have some downloads again then.
Sorry for the inconvenience if you came here looking for quick access to some Linux virtual machine images.
I have a small NAS box with 6 x 1TB HDDs in RAID 6 and 4 x 500GB HDDs in RAID 5. Recently, thanks to the arrival of my baby girl and an HD handycam, I was running out of space on the array fast, so when I saw a decent deal on 1TB drives at Newegg I picked up a couple to add to the RAID 6 volume.
Growing a RAID array in Linux using mdadm is easy. I made sure to use fdisk to create a single large partition on each drive and mark the partition type as fd (Linux raid autodetect) prior to adding it to the array.
Device Boot Start End Blocks Id System
/dev/sdm1 1 121601 976760001 fd Linux raid autodetect
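The partitioning step above can be done non-interactively by feeding fdisk its answers on stdin. A minimal sketch, assuming the new disk shows up as /dev/sdm (adjust the device name for your system), which produces a single whole-disk partition of type fd like the one shown:

```shell
# n = new partition, p = primary, 1 = partition number,
# two blank lines accept the default start/end (use the whole disk),
# t = change partition type, fd = Linux raid autodetect,
# w = write the table and exit.
fdisk /dev/sdm <<'EOF'
n
p
1


t
fd
w
EOF
```

Double-check the device name first (e.g. with `fdisk -l`); writing a partition table to the wrong disk is destructive.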
Adding the drives to the array is straightforward:
mdadm --add /dev/md1 /dev/sdl1
mdadm --add /dev/md1 /dev/sdm1
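Note that `mdadm --add` on its own only registers the new disks as hot spares; to actually grow the array you also need a `--grow`, followed by a filesystem resize once the reshape finishes. A minimal sketch, assuming /dev/md1 now has 8 member devices (the original six plus the two new drives) and carries an ext3/ext4 filesystem:

```shell
# Promote the spares to active members by raising the device count
# (8 = new total number of devices in the RAID 6 array).
mdadm --grow /dev/md1 --raid-devices=8

# Watch the reshape progress; this can take many hours on 1TB drives.
cat /proc/mdstat

# Once the reshape completes, grow the filesystem to fill the new space.
# resize2fs works online for ext3/ext4; other filesystems have their own tools.
resize2fs /dev/md1
```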
An awesome new feature in the just-released ESXi 4.1 is the ability to PXE boot and script the install using a kickstart script. If you are looking for information on how to do this, see billhill’s post here.
However, while adding that feature VMware seems to have slightly broken the ability to PXE boot ESXi and then install it manually. This is useful if you have a lights-out lab and want to use the local PXE server instead of the virtual media options, but still want to customize the install interactively because the lab servers do not have standardized components and hardware, among other issues.
Refer to the link above for the overall PXE/DHCP/TFTP install steps. I plan to document only what is different for a manual install.