My first foray into Objective-C was, for lack of a better description, a sink-or-swim situation. I was working for a previous employer, and our lead iPhone developer had just been laid off; my old boss was in my office the next day asking me how quickly I could "get up to speed". "You know Ruby," he said. "How difficult could it be?" It was time to get some books.
The first point I would like to raise is that Objective-C, while itself quite elegant (at least in comparison to its namesake), is fairly useless on the Mac platform without the Cocoa framework. And it is this framework that I think a lot of Rubyists get hung up on. The other big sticking point is manual memory management through the use of retain and release.
Cocoa's roots go all the way back to the 1980s and the NeXTSTEP operating system, which came to Apple along with Steve Jobs when the company acquired NeXT in 1996. This is why Cocoa's core classes, such as NSArray and NSString, all begin with 'NS'. The naming scheme is a holdover from that earlier OS, and while those two extra characters may not seem like much hassle, to a Rubyist they represent an unnecessary burden of verbosity. In addition, Cocoa makes extensive use of the delegate pattern, something that is rarely seen or needed in Ruby and that can make it difficult to trace an execution path for those unfamiliar with the concept. One of the limitations of Objective-C, the difficulty of creating a method with a variable-length argument list, is commonly worked around with the poorly named hash
userInfo, which frequently appears in method signatures without any indication of its purpose. And lest we forget those wonderfully verbose method names: I think even the most die-hard, grizzled veteran of Objective-C would agree that NSString's
stringByReplacingOccurrencesOfString:withString: could have been better named.
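To make the contrast concrete, here is a minimal Ruby sketch of the idiom userInfo imitates: when a method cannot enumerate every argument up front, callers pass a trailing hash of extra data. The `post_notification` method and its keys are invented for illustration and are not part of any real API.

```ruby
# A trailing options hash plays the role Cocoa assigns to the
# userInfo dictionary: a grab-bag of extra, caller-defined data.
# `post_notification` is a hypothetical name, not a real API.
def post_notification(name, user_info = {})
  [name, user_info]
end

name, info = post_notification("TimerFired", interval: 2.5, repeats: true)
# name => "TimerFired", info[:repeats] => true

# And Ruby's terse counterpart to NSString's verbose
# stringByReplacingOccurrencesOfString:withString:
"foo-bar".gsub("-", "_")  # => "foo_bar"
```

The hash carries whatever the caller needs, but, just like userInfo, nothing in the method signature tells you which keys to expect.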
Rubyists are proud of the fact that they don't have to worry about memory management. The more knowledgeable among us could tell you that the garbage collector, or GC, kicks in once a process has accumulated roughly eight megabytes of objects, scanning each one to see whether anything still points to it and releasing it back to the OS if nothing does. But most Rubyists would refuse to venture any further down that dark path of memory management out of a simple need to retain their sanity.

Indeed, for a good few weeks I struggled with this concept, until my fellow iPhone student Paul Barry introduced me to a book that would change my outlook. Titled "Learn Objective-C on the Mac", it proved to be a treasure trove of information on object allocation. Chapter nine in particular, which deals with memory management, made it crystal clear what was going on under the hood when an object was created, and thus retained, and when it was released. The concept itself is simple: retaining an object increases its "retain count" by one; releasing it decreases that count by one; and when the count reaches zero, that space in memory is returned to the OS. Immediately the seemingly random crashes in my applications became decipherable and easily fixed, and my hostility toward Objective-C and the Cocoa framework melted away.
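The retain-count rules are simple enough to model in a few lines of Ruby. This is a toy sketch for illustration only; the class and method names here are invented, and real Objective-C does this bookkeeping in the runtime, not in your own code.

```ruby
# A toy model of Objective-C's retain counting. An object starts
# life with a count of one; retain bumps it, release drops it, and
# at zero the object is deallocated and its memory returned to the
# OS. Releasing an already-deallocated object is the classic cause
# of those seemingly random crashes.
class RetainCounted
  attr_reader :retain_count

  def initialize
    @retain_count = 1   # alloc/init hands you an object you own
    @deallocated = false
  end

  def retain
    @retain_count += 1  # claim ownership
    self
  end

  def release
    raise "over-released!" if @deallocated
    @retain_count -= 1  # relinquish ownership
    @deallocated = true if @retain_count.zero?
  end

  def deallocated?
    @deallocated
  end
end

obj = RetainCounted.new   # retain count: 1
obj.retain                # retain count: 2
obj.release               # retain count: 1
obj.release               # retain count: 0 -> deallocated
obj.deallocated?          # => true
```

One extra release on obj here would raise an error, this sketch's stand-in for the EXC_BAD_ACCESS crash an over-release triggers in a real Cocoa app.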
As Rubyists, we tend to value the simple over the complex and prefer not to sweat the small stuff. Yet on the whole we also enjoy learning new concepts, and many of us can attest to that being the driving factor behind leaving a former language of choice. On occasion, as with Objective-C and Cocoa, our preference for simplicity and our desire to learn collide head-on. But rather than tweet about how ugly Cocoa looks or how memory management in Objective-C is beneath you, I challenge you to dive deeper. After all, Ruby itself is built on Objective-C's forebear, C, and no programmer has walked away worse for wear after peeking under the hood. Learning Objective-C not only opens up the world of iOS application development but also makes us better Rubyists.