Prior Inclination

Earlier this year, Alex Honnold scaled El Capitan without ropes. It was an impressive and dangerous feat.

What stuck with me is this quote about Alex from Tommy Caldwell:

Alex once told me that he had never fallen completely unexpectedly—meaning without at least some prior inclination that it could happen.

That is amazing, and it shows that Alex is simply operating at a higher level. I would make the blanket generalization that most climbers have, at some point, fallen without anticipating it.

What makes this stick with me is applying it to other fields. Outdoor climbing can easily be a life-or-death ordeal. Software development generally is not.

Can you imagine a programmer whose code never crashes without them having at least some prior inclination that it could? That sounds impossible, right? Or like an extremely slow development cycle.

I’m not saying that programmers need to be able to anticipate every crash ever. But if someone were able to never have their code crash without prior inclination, that would be awesome.

Supporting the iPhone X with Storyboard

There are a ton of guides out there for updating your app(s) to support the iPhone X.

If you create your views programmatically, you can use iOS 11’s safeAreaLayoutGuide. If your app still targets iOS 10 or below, you can branch on the availability condition #available().
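
Here’s a minimal sketch of that branching with a programmatically created label; the view and constraints are just illustrative:

import UIKit

class ExampleViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let label = UILabel()
        label.text = "Hello, iPhone X"
        label.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(label)

        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

        if #available(iOS 11.0, *) {
            // iOS 11+: pin to the safe area so content clears the notch
            label.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor).isActive = true
        } else {
            // iOS 10 and below: fall back to the top layout guide
            label.topAnchor.constraint(equalTo: topLayoutGuide.bottomAnchor).isActive = true
        }
    }
}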

With Storyboards, one thing I appreciate is that Apple made the safe area layout guide backwards deployable.

Apple told us in WWDC 2017 Session 412 that Storyboards using safe areas are backwards deployable. This means you can switch to using the safe area layout guide in Interface Builder even if you still target iOS 10 and older.

via https://useyourloaf.com/blog/safe-area-layout-guide/

I don’t always use the storyboard for my layouts, but for apps that I need to update, this backwards deployability helps a lot.

Rob Pike’s dream setup

I recently came across a short article on Uses This.

Rob Pike talks about his setup. His response about his dream setup and stateless devices really intrigued me.

Twenty years ago, you expected a phone to be provided everywhere you went, and that phone worked the same everywhere. At a friend’s house, or a restaurant, or a hotel, or a pay phone, you could pick up the receiver and make a call. You didn’t carry a phone around with you; phones were part of the infrastructure.

– Rob Pike
https://usesthis.com/interviews/rob.pike/

This is hard to imagine for computers, today or in the future: a world where you didn’t have to carry around a mobile phone because you could simply use one wherever you went. For many reasons, what Rob describes for phones (applied to computers) doesn’t seem likely.

But it’s nice to imagine a world where you didn’t have to update your phone every couple of years. You could rely on a device at home or at work to pick up where you left off, without having to lug something expensive around and keep it charged.

ARKit Impressions

I’ve been working with ARKit recently. I am planning on releasing an AR basketball game when iOS 11 is released.

Here are some miscellaneous thoughts about working with ARKit:

  • It’s hard to find answers to common questions about doing simple things in ARKit. Searching for SceneKit yields slightly more results, but even those are sparse. The Apple developer SceneKit & ARKit forums don’t appear to have much activity either, so you’re left with Stack Overflow and random Internet blog posts.
  • Working with ARKit means working with SceneKit, Apple’s framework for making 3D assets easier for developers to work with. SceneKit & 3D are new to me. A lot of the math around position, orientation, Euler angles, transforms, etc. can get complex fast once it involves matrix transforms and quaternions.
  • It’s really hard to find DAE/COLLADA assets. The DAE format is meant to be an interchange format so that various 3D software packages can communicate with each other. The reality is that exporting to DAE, or converting from another format to DAE, is a crapshoot. I’ve used Blender briefly to look at 3D assets, but digging into 3D modeling is a huge time sink for someone looking to get involved in ARKit. I wish there were an online store that focused on selling low-poly (<10K) DAE files.
  • Relatedly, working with 3D assets as someone new to them is very frustrating. The concepts of bounds vs. scaling as they relate to importing into your SceneKit scene were very challenging (with the 3D model that I imported). If you have your own in-house or contracted 3D modeler, you can get 3D assets that work well with SceneKit, but I had countless issues with off-the-shelf 3D models & file formats.
  • After you’re able to import your 3D model, modeling the physics geometry can be a challenge. SceneKit allows you to import the geometry for your physics body as-is as a concave polyhedron, but you probably don’t want that. I had to manually recreate a basketball hoop using multiple shapes combined into a single SCNNode (see the sketch after this list).
  • ARKit is not all-powerful. The main feature ARKit gives you is horizontal plane detection. Occlusion doesn’t come with ARKit. Expect many apps that deliver an experience reliant on a plane/surface, like your desk or the floor.
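
Here’s a rough sketch of both approaches for a hoop’s physics body. It’s illustrative only: the node is a stand-in for one loaded from a DAE file, and the dimensions and segment count are made up.

import SceneKit

// Stand-in for a node loaded from a DAE file
let hoopNode = SCNNode()

// Option 1: use the imported geometry as-is.
// Concave polyhedron shapes only work for static physics bodies.
let asIsShape = SCNPhysicsShape(
    node: hoopNode,
    options: [SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.concavePolyhedron])

// Option 2: approximate the rim with simple box segments arranged in a ring.
let segmentGeometry = SCNBox(width: 0.05, height: 0.02, length: 0.02, chamferRadius: 0)
let segmentShape = SCNPhysicsShape(geometry: segmentGeometry, options: nil)

var shapes = [SCNPhysicsShape]()
var transforms = [NSValue]()
let segmentCount = 8
for i in 0..<segmentCount {
    let angle = Float(i) / Float(segmentCount) * 2 * .pi
    // Push each segment out to the ring, then rotate it around the center
    let translate = SCNMatrix4MakeTranslation(0.23, 0, 0)
    let rotate = SCNMatrix4MakeRotation(angle, 0, 1, 0)
    shapes.append(segmentShape)
    transforms.append(NSValue(scnMatrix4: SCNMatrix4Mult(translate, rotate)))
}
hoopNode.physicsBody = SCNPhysicsBody(
    type: .static,
    shape: SCNPhysicsShape(shapes: shapes, transforms: transforms))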

ARKit is exciting, but don’t expect the world yet. Future ARKit releases & better iOS hardware should provide more compelling experiences. Today, you can expect to play with 3D models on a surface (with surface interaction) or in the air (with limited or no environment interaction).
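
For reference, opting into that plane detection is a single flag on the session configuration. A minimal sketch, assuming an ARSCNView named sceneView:

import ARKit

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
sceneView.session.run(configuration)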

CLI Cut Visual Option

Something I came across recently was command-line text manipulation of a delimited text file with cut. The way that the list option is passed in is cool.

For demonstration purposes, we have a contrived text document “dummy.txt” that happens to be delimited by the % character. The contents of the file are:

name%car%temp%color
john%honda%fair%blue
tom%benz%fair%red
ed%bmw%cold%green

To get the first column of data, you can run

cut -d% -f1 dummy.txt

which gives you:

name
john
tom
ed

If you wanted to save the output, the standard command line “>” redirect comes in handy.
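
For example, to save the first column to a file (the output filename is arbitrary):

cut -d% -f1 dummy.txt > names.txt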

To get the columns up to (and including) the 2nd column, you can run

cut -d% -f-2 dummy.txt

which gives you:

name%car
john%honda
tom%benz
ed%bmw

To get the 2nd & 3rd columns, inclusive, you can run

cut -d% -f2-3 dummy.txt

which gives you:

car%temp
honda%fair
benz%fair
bmw%cold

To get the 3rd column onward to the last column, you can run

cut -d% -f3- dummy.txt

which gives you:

temp%color
fair%blue
fair%red
cold%green

The examples above are contrived, but I think the hyphen syntax in the list fields option is easy to learn and visually clear (for a CLI).

Crosswalk Aides

I recently went on vacation in Europe. When I visit a new place, I try to get a feel for the environment by walking around everywhere. Things like the OK-to-cross icons always amuse me since they are different everywhere.

London has a ton of history (old buildings), but I found that it exceeded my expectations for modern accessibility. The signage throughout the subway and public areas (train stations, etc.) was really easy to follow.

In the UK, cars drive on the left side of the road, the opposite of the US. This means people coming from the US have to look the other way for oncoming traffic while crossing the street.

One particularly helpful affordance in London was the painted messages telling you which way to look:

There are also markings for both directions:

I appreciated these messages. The city didn’t have to paint them throughout London, but it did, and they helped me make sure I was looking the correct way for traffic.

Multiple UIDynamicAnimators

In past apps, I tended to have one UIDynamicAnimator in my view controller and that was that. UIDynamicAnimator lets you apply UIKit Dynamics behaviors and effects to your UIViews.

The issue that I ran into was that removeBehavior(_:), which “Removes a specified dynamic behavior from a dynamic animator”, didn’t seem to work. I would keep track of specific UIDynamicBehavior instances and pass them as the argument to removeBehavior(_:), but it didn’t appear to remove the behavior.

What does work is calling removeAllBehaviors() on the UIDynamicAnimator. That’s fine if you only have one UIView, but most likely you have multiple UIViews & behaviors, and calling removeAllBehaviors() on your only animator could leave other UIViews frozen out of place.

Recently, I released a fun weekend app, Fun Faces. Browsing Stack Overflow, it occurred to me to use multiple UIDynamicAnimators: one for each UIView I wanted to animate. This worked for my use case, since calling removeAllBehaviors() on one animator doesn’t interrupt the other UIViews’ behaviors (if any).
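
Here’s a minimal sketch of the approach. The helper names are my own, not from Fun Faces:

import UIKit

class FacesViewController: UIViewController {

    // One animator per animated view, so clearing one view's behaviors
    // doesn't freeze the others
    var animators = [UIView: UIDynamicAnimator]()

    func addGravity(to view: UIView) {
        let animator = UIDynamicAnimator(referenceView: self.view)
        animator.addBehavior(UIGravityBehavior(items: [view]))
        animators[view] = animator
    }

    func stopAnimating(_ view: UIView) {
        // Safe here: this animator only owns this one view's behaviors
        animators[view]?.removeAllBehaviors()
    }
}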

Using multiple UIDynamicAnimators isn’t an answer if you have multiple UIViews under the same animator with UICollisionBehavior or other effects that let the UIViews interact with each other.

Using CoreMotion deviceMotion to keep image level example (Xcode 8.3, Swift 3.1)

I’ve been playing around with CoreMotion since it is frankly so cool. I’ve followed NSHipster’s CMDeviceMotion post, but I made some changes to use the latest Swift, v3.1. Below is sample code for using both the gyroscope and accelerometer to keep an image level when you rotate your phone.

//
//  ViewController.swift
//
//  Created by Rex on 4/22/17.
//

import UIKit
import CoreMotion

class ViewController: UIViewController {

    let interval = 0.01
    let imageFilename = "bg.jpg"
    let imageWidth = CGFloat(800)
    let imageHeight = CGFloat(1200)
    
    let manager = CMMotionManager()
    var imageView: UIImageView?

    override func viewDidLoad() {
        super.viewDidLoad()

        guard manager.isDeviceMotionAvailable else { return }
        
        setImageView()
        
        manager.deviceMotionUpdateInterval = interval
        let queue = OperationQueue()
        
        manager.startDeviceMotionUpdates(to: queue, withHandler: {(data, error) in
            guard let data = data else { return }
            let gravity = data.gravity
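            // Angle of the gravity vector in the device's coordinate
            // space; offsetting by pi keeps the image upright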
            let rotation = atan2(gravity.x, gravity.y) - .pi

            OperationQueue.main.addOperation {
                self.imageView?.transform = CGAffineTransform(rotationAngle: CGFloat(rotation))
            }
        })
    }
    
    func setImageView() {
        if let img = UIImage(named: imageFilename) {
            let iv = UIImageView(image: img)

            // center the image
            let x = (self.view.frame.width/2)-(imageWidth/2)
            let y = (self.view.frame.height/2)-(imageHeight/2)
            iv.frame = CGRect(x: x, y: y, width: imageWidth, height: imageHeight)
            
            self.view.addSubview(iv)
            self.imageView = iv
        }
    }
    
}

The setup is simple. Create a new Single View Application project in Xcode. You’ll need to add a JPG to the Assets.xcassets folder in the project. Replace the ViewController with the code above and make sure to update the image filename, width, and height constants.

The code is hopefully straightforward. We make sure the CMMotionManager’s device motion is available, then add the image view (the only UIView we’re adding to the screen). We use a background OperationQueue to process the rotation calculation off the main queue, then apply the transform to the image view back on the main queue.

App Strategy

On the subject of app planning & strategy, I recently came across this post from Rob Caraway: http://robcaraway.com/blog/index.php/2017/02/12/how-i-overcame-crippling-perfectionism-and-made-200k-on-the-saturated-app-store/

Parts of it really resonated with me. He says:

Our strategy was basically “Let’s brainstorm ideas and ship massive features and hope people want them”.

That has been my naive strategy so far: acting as my own ideal user.

Then he talks about validating an MVP:

  • using “Traffic, as indicated by Google Trends”
  • a landing page to capture e-mails
  • building a prototype in a week
  • validating the demand for the prototype

This all seems standard, even obvious, when you look at it. But in reality, I have various app ideas that I think are worth making, and when it comes to picking the next one, my current process might as well be rolling dice with bad odds. It’s 1000% obvious in hindsight: building a neat app with good UX in 2017 doesn’t count for much. Having a solid marketing strategy in a validated niche is significantly more important than building the best app ever.

I’m currently at a point where I’ve released 3 iOS apps. One of them has done decently; the other two have not. I have to decide between prioritizing new features for my current apps and creating a new app. For the sake of learning new iOS tools (like the camera), it’s probably better for me to work on a new app. Hopefully this time I can properly validate my idea before I spend months building it.

DSLRs

With the way the world is going, it’s too convenient to shoot photos on your smartphone. Charging your DSLR battery pack(s), making sure the memory cards are cleared, and lugging around a backpack full of lenses is a lot of work.

I have an old Canon 40D camera with both EF (full frame / crop) & EF-S (crop only) lenses. I’ll be traveling in a few months and I want something nicer than my iPhone for taking photos.

It seems like the two main options are: get a crop DSLR body or get a full frame DSLR body. Staying within Canon’s ecosystem would be the most convenient. Leaving Canon opens up a can of worms (Nikon, Sony, Pentax, etc?).

If I were to just suck it up, the answer seems to be a new full frame DSLR & L glass (24-70 EF lens). But I’m leaning towards a relatively cheap new crop DSLR body, making do with the lenses I have (the economical choice).

With a camera, I care about low-light sensitivity (ISO grain) and maybe shutter speed. I don’t care about video options, as I don’t intend to shoot and edit movies.

Even though the standalone camera market seems to be dwindling, the big lenses & big image sensors of DSLRs will always enable photography that mobile phones cannot match.