Getting Facebook iPad App Back after Update

Wednesday, September 21, 2011 at 8:04 AM
I've been using the Facebook iPad application for the last few weeks and love it! So I was caught off-guard when Facebook updated their app to v3.5 and removed iPad support completely. Here's a quick tutorial on how to get it back!

1.) jailbreak your iPad (pretty sure you've already done this if you were using the iPad application)
2.) delete your existing Facebook application from iTunes & your iPad
3.) sync iTunes & iPad (just in case)
4.) get the "Installous" application (google it)
5.) download an older copy of the facebook.ipa file (google it, look for version 3.4.4)
6.) copy the facebook.ipa file to /private/var/mobile/Documents/Installous/Downloads/ on your iPad (use scp/ssh or iPhone Explorer - see the example after this list)
7.) open Installous, go to Downloads, and install the facebook.ipa
8.) install "Face Forward" from Cydia. Respring after installation.
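
If you go the scp route for step 6, it looks something like this - a sketch, assuming OpenSSH is installed from Cydia, the root password is still the jailbreak default of "alpine", and 192.168.1.50 stands in for your iPad's Wi-Fi address:

# copy the older .ipa into Installous' download folder on the iPad
scp facebook.ipa root@192.168.1.50:/private/var/mobile/Documents/Installous/Downloads/
# (and consider ssh'ing in afterwards to change that default password with `passwd`)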

There you go. Now go waste your time on Facebook!

Active Twinax with Intel 10GbE NICs?

Friday, September 2, 2011 at 11:46 AM
"There are two kinds of people in this world..." I hear this from my father-in-law on a weekly basis. It's usually followed by some quip like "the quick and the hungry".

Twinax cabling (aka direct-attach SFP+) is a cost-effective way to interconnect switches, servers, and storage at 10GbE. Current implementations are limited to lengths of roughly 5 to 10 meters, but the economics are quite compelling compared with discrete SFP+ optics. Well, there are two kinds of 10GbE twinax cables in this world: active and passive. (My father-in-law is beaming right now.) Unfortunately, not all NICs or switches support both types.

I was helping a client migrate their VMware ESX 4.x servers to redundant 10GbE interfaces. They had Dell servers with OEM Intel NICs. They had done some initial testing with Dell's 24-port 10GbE switch, but ended up selecting Brocade's VDX6720 Ethernet Fabric for its redundancy and growth prospects. Once we had everything cabled and powered up, there were connectivity issues: no link, and the VMware hosts gave us purple screens on shutdown.

A little poking in the /var/log/vmkernel logfile showed this:

"vmnic8: ixgbe_sfp_config_module_task: failed to load because an unsupported SFP+ module type was detected."

As it turns out, Intel's 10 Gigabit AF DA Dual Port Server Adapter isn't compatible with the active twinax cables that came with the Brocade VDX switches, and the VDX switches don't support the passive cables that came with the Intel 82598-based 10GbE NICs.

This customer also had a handful of the newer Intel Ethernet Server Adapter X520 series NICs sporting the newer Intel 82599 ASIC. We still had problems getting things working. A little more digging showed that active twinax support for the X520 series was a semi-recent addition and, according to Intel, requires driver version 15.3 or later. We spun up a copy of vSphere 5.0 hoping that it included a newer version of ixgbe. It does, and we now have 10GbE over active twinax between the Brocade VDX6720s and the Intel X520 NICs.
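
If you want to verify which ixgbe version a host is actually running, ethtool will tell you from the ESX service console (or ESXi tech support mode) - vmnic8 below is just the interface from the log message above:

# show driver name, version, and firmware for the 10GbE interface
ethtool -i vmnic8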

Moral of the story: get your active/passive cables straight. Make sure your NIC & switch support the same cable type, and make sure your drivers are recent enough (at least for the X520) to support it.

Fix iFolder Client Crash in iFolderShell.dll

Wednesday, August 17, 2011 at 2:31 PM

I love iFolder, Novell's file synchronization solution. It keeps a handful of folders synchronized across multiple machines including my desktop and 2 laptops.

There hasn't been much development on the open-source side of things, and some recent Microsoft updates combined with having iFolder installed have led to nuisance application crashes. Usually it happens after I save a file and exit a program (like Notepad), and the offending module is "iFolderShell.dll".

There are some discussions on the issue over at Novell's forums. It looks like the 3.8.4.1 and 3.9 beta clients may have solved this problem - but those clients are only available to Novell's "paying" customers.

There is one way to fix the problem: disable iFolder shell integration. The only effect I've seen so far is that I no longer see the iFolder icon on the folders I'm synchronizing - but other than that, things stay synchronized. Save the following lines into a .reg file and double-click it. Hopefully this solves your client crashes like it did for me. Include the REGEDIT4 line below:

REGEDIT4
[-HKEY_CLASSES_ROOT\*\shellex\PropertySheetHandlers\iFolder]
[-HKEY_CLASSES_ROOT\Folder\shellex\ContextMenuHandlers\iFolder]
[-HKEY_CLASSES_ROOT\Folder\shellex\PropertySheetHandlers\iFolder]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers\iFolder0]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers\iFolder1]
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved]
"-{AA81D831-3B41-497c-B508-E9D02F8DF421}"
"-{AA81D830-3B41-497c-B508-E9D02F8DF421}"

Or you can download the .reg file here: ifolder-shell-disable.reg
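
If you'd rather push this out from a script than double-click it, something like this works (regedit's /s switch suppresses the confirmation prompt, and restarting Explorer makes sure the unregistered extensions are actually unloaded):

rem import the .reg file silently
regedit /s ifolder-shell-disable.reg
rem restart Explorer so the change takes effect without logging off
taskkill /f /im explorer.exe
start explorer.exe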

nSpaces couldn't register the hot key MOD_WIN:S

Monday, August 8, 2011 at 2:17 PM
Thought I'd give nSpaces a try since I'd heard great things about it. Unfortunately, after installation, I get a pop-up box that states "Couldn't register the hot key MOD_WIN:S".

I sincerely hope you didn't come here looking to fix it, since I have no idea what's causing it. Since there were 0 Google hits on that phrase, I thought I'd throw this out there so you would know that I also feel your pain.

Alienware M14x Startup Problems after Adding 2nd Hard Drive

Wednesday, August 3, 2011 at 11:05 PM

I picked up an Alienware M14x laptop not too long ago. I never use optical media, so I re-purposed the optical bay to hold a hard drive: I took the original 500GB Seagate SATA drive and put it in this secondary bay, then upgraded the primary HDD bay with an SSD using one of these. You can read more about the process, with pics, at notebookreview.com.

Shortly after doing this I experienced startup problems with my M14x. It would take a long time to boot, usually hanging at "Loading Windows". It wasn't every time - but more often than not I would have to forcefully power the laptop off and try again.

Some posters on the notebookreview.com forums believe there are issues running at SATA 3Gbps speeds on the secondary SATA connector. I did a little digging, and it looks like Seagate has a jumper on this particular drive that forces SATA 1.5Gbps speeds. More information can be found at seagate.com; see section 3.2 for a picture.

I didn't have any "mini" jumpers, so I took a standard-sized jumper, tweaked it with a pair of needle-nose pliers, and jammed it in there. Since then, I've booted successfully 10 out of 10 times. Your mileage may vary.

Consumer or Enterprise Drives for RAID? (Part 2 of 2)

Monday, July 4, 2011 at 9:03 AM

In my last post, I described a couple of "ideal" scenarios that involved a standalone consumer-class hard drive, along with enterprise-class drives connected to a RAID controller. For the big finale, let’s look at the non-ideal scenario:

Scenario #3: Let’s say your data is stored on a RAID array using consumer-class drives.

You go to print your paper and one of the hard drives is unable to read a sector. What happens now? As mentioned previously, consumer-class drives don't give up quickly. On the flip side, RAID controllers don't have much patience. After a handful of seconds, the controller says "that drive is not responding to commands, so it must have failed; I'm going to kick it out of the array and get on with my day". The controller detaches the drive from the array, recreates the missing data from the remaining drives, and you're able to print your paper.

So far that’s not such a bad thing, at least as far as your paper is concerned. You were able to print it out and go on your way. Due to the nature of RAID, all you should have to do is put a new hard drive back into the array and it will rebuild your parity data from the other drives. Right?

Unfortunately, this leaves you in a somewhat precarious position. The data on your array is now at risk (assuming RAID5): you don't have any redundancy until the array is completely rebuilt. What are the chances that you'd have two drives fail at the same time? Pretty low. What about the chances of a single read error on one of the two remaining consumer-class drives during the rebuild? Much greater. And guess what happens when one of those drives encounters a read error, takes heroic measures to recover it, and the controller kicks it out of the array? Very, very bad news for your data. Kiss it goodbye, and you'd better have backups.

Scenario #4: Let’s compare this same situation using enterprise-class drives. You go to print, there’s a read-error on one of the drives, the drive gives up after 7 seconds and notifies the controller, the controller recreates the data from that one sector by using data from the other drives and ALL of your drives stay in the array! The controller can re-create the missing data from the other drives, write it somewhere else on this other drive, and you’re as good as new!

The moral of the story is: TLER/CCTL/ERC ensures that your hard drives stay in the array even when they encounter an error. Consumer-class drives are much more likely to be kicked out of an array under similar circumstances – and that’s bad news for your data.
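
If you're wondering whether a particular drive supports one of these features, smartmontools can query (and sometimes set) the timeouts. A sketch, assuming /dev/sda is the drive in question - the values are in units of 100 ms, and many consumer drives either reject the command or forget the setting after a power cycle:

# query the drive's error recovery control (ERC/TLER/CCTL) settings
smartctl -l scterc /dev/sda
# attempt to set the read/write recovery timeouts to 7.0 seconds
smartctl -l scterc,70,70 /dev/sda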

This happened to me, with some slight variations. I was using RAID6, which preserves data even with two drive failures. When one drive failed, I replaced it with a different one. During the rebuild, another drive was kicked out of the array, and during a subsequent rebuild a third drive was kicked out as well. This toasted the data on my array. It took weeks and $$$ to gather that data back together - probably a lot more than the cost delta between consumer-class and enterprise-class drives, and definitely more than a decent backup solution.

I've since moved to a ZFS-based storage appliance (NexentaStor) and religiously backup all of my data.

Consumer or Enterprise Drives for RAID? (Part 1 of 2)

Wednesday, April 27, 2011 at 9:55 AM

Enterprise-class hard drives have a few features that make them more appropriate for use in RAID arrays. One well-known technology is Western Digital's Time Limited Error Recovery (TLER). Samsung has something similar in their Command Completion Time Limit (CCTL), and Seagate calls theirs Error Recovery Control (ERC).

What's the big deal about TLER/CCTL/ERC? Feel free to hit the links above if you would like the long-winded manufacturer's answer. The short of it is that these hard drives will "give up" fairly quickly when they experience a read error.

I'm sure you're thinking "WHAT??? IT GIVES UP MORE QUICKLY?" Yes, in a RAID array, giving up quickly is a good thing(tm). Let's look at two different scenarios:

Scenario 1: You spent all week working on a paper for school. The document was stored on your hard drive in a sector that's just starting to develop a problem. The next day when you go to print it, the hard drive is unable to read from the sector where your paper was stored.

Pop Quiz: What do you want your hard drive to do?
- A.) give up after a few seconds and say “sorry, you’ve lost your paper”
- B.) be heroic, keep at it, attempt reading the failing sector over and over again until the data is recovered, no matter how long it takes.

Yes, “B” was the correct answer, and that’s exactly what consumer-class hard drives do. When they experience a read-error, they keep at it. I'm not sure how long, but as far as you're concerned, it can take as long as it wants because you need that data.

Scenario 2: Same situation, but instead of storing your paper on a consumer-class hard drive, you store it on a RAID array using enterprise-class drives with TLER/CCTL/ERC. When it comes time to print the paper, one of the hard drives is unable to read one of the sectors where your paper is stored. This isn’t a problem because the drive will give up after a few seconds. The drive will notify the RAID controller that it couldn’t read a sector. The RAID controller will then recreate the missing data from the other drives. You print your paper and you’re off to school. In this situation, giving up quickly is a good thing. You only had to wait a couple of seconds and you were able to print your paper.

In Scenario #1, while the drive is attempting to recover your data, the computer is unresponsive. It acts like it's frozen and can't do much else until it finishes reading your file - and that's okay. You want that file, and there's only one place to get it.

In Scenario #2, delays like this are completely unacceptable. Can you imagine hundreds of employees sitting around twiddling their thumbs for who-knows-how-long because "the server" (with a consumer-class drive) took heroic measures to read a failing sector? Lost productivity. What about hundreds of customers on your website trying to buy stuff? Lost revenue. Either way, you want that drive to give up quickly so the RAID array can do its thing and your server can get back to supporting your business.

There's one more scenario, the one where you use consumer-class hard drives in the RAID array. That's a longer story and one I'll save for my next post. Let's just say that a drive taking "heroic measures" isn't awarded any medals by the RAID controller. Instead it is taken out back and shot in the head. It's not a pretty sight.

Error 800700c1 with Veeam FastSCP & 64-bit Windows

Monday, April 25, 2011 at 7:25 AM

Veeam FastSCP is an excellent utility to move files on/off your ESXi host. It's 2x faster than just NFS-mounting the datastore directly from my Windows box.

I was trying to write some files to my datastore recently and got this error message:

"Retrieving the COM class factory for component with CLSID {5F1555F0-0DBB-47F6-B10B-0AB0E1C1D8CE} failed due to the following error: 800700c1"

The short fix is detailed at Everything-Virtual. Hope that helps someone.

Protools 8 LE and BFD Lite in Windows 7 x64

Wednesday, April 13, 2011 at 7:18 PM

Helping a family friend with a perplexing problem: new computer, Windows 7 x64, Protools 8, and an old plug-in called BFD Lite. Put them all together and you get an error every time you try to load a song that says:

BFD Lite folder could not be found "" (or something like that).

I tried BFD Lite version 1.0 and the 1.5 update too - no luck. Every time I started Protools 8 LE, I'd get the folder-not-found error message. I tried installing in different folders, changing permissions, etc., and nothing worked. I finally stumbled on the issue: Protools LE needs to be run in "XP Compatibility Mode" for the BFD Lite plug-in to function properly. Once I turned that on and selected the BFD Lite folder one more time, things started working great.

VMware ESXi 4.1 & VT-d w/ Supermicro X8SAX

Sunday, February 20, 2011 at 9:57 PM


I currently have 2 servers in the house... One of them is a NexentaStor storage appliance, and the other is a Windows 7 box (also running VMware Server 2.0 with a handful of VMs). I was interested in consolidating the two into a single server since neither box was being heavily utilized. After reading this page, I decided to base the single server on VMware's ESXi product. The first VM would need to be the storage appliance, and the rest of the VMs would NFS-mount the storage appliance and boot from there. This setup should handle everything I need it to, and if there were a catastrophic failure, I could pull the drives, put them into another ZFS system, and recover the data.
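
For what it's worth, pointing ESXi at the storage VM's NFS export can be done from the console as well as the vSphere Client. A sketch using esxcfg-nas - the address and share path here are made up:

# add an NFS datastore backed by the storage appliance VM
esxcfg-nas -a -o 192.168.1.10 -s /volumes/vmstore vmstore
# list NFS datastores to confirm the mount
esxcfg-nas -l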

VMware doesn't make it easy to create Raw Device Mappings using SATA disks. That's okay - ESXi has a feature that allows you to pass hardware on the host machine directly through to a guest. This sounds great. I planned on passing a couple of Intel SASMF8I LSI1068-based controllers through to the storage VM. I have a Supermicro X8SAX motherboard, an Intel i7-930 CPU, and 24GB of RAM (6x4GB DDR3).

In order to enable hardware passthrough, you have to go into the BIOS and enable Intel VT-d. Unfortunately, any time I enabled this feature, ESXi refused to install. If I installed ESXi first and then enabled the feature, it refused to complete booting. I went through all sorts of BIOS settings - ACPI, Intel Virtualization Technology, etc. - and couldn't find a combination that worked. I upgraded the BIOS from 1.1a to 2.0 and that still didn't work. I tried ESXi 4.1 along with 4.1U1.

What finally worked was downgrading the BIOS all the way back to 1.0c! The only downside so far is that it only recognizes 20GB of RAM instead of 24GB. Other than that, the hardware passthrough works perfectly. 1.0c isn't available on Supermicro's website, so good luck finding it... I grabbed it from some website in China after some extensive Google searching.

I've got an e-mail in to Supermicro asking why. I'll edit this post if I get an explanation from them.

Installing VMware ESXi v4.1 from USB Stick

Tuesday, January 25, 2011 at 4:52 PM
HERE is how you can install VMware ESXi v4.1 from a USB memory stick. I had previously tried just writing the .ISO image using unetbootin, but that didn't work. This other method works great, though!
