Rob Pike’s dream setup

I recently came across a short article on Uses This.

Rob Pike talks about his setup. His response about his dream setup, built on stateless devices, really intrigued me.

Twenty years ago, you expected a phone to be provided everywhere you went, and that phone worked the same everywhere. At a friend’s house, or a restaurant, or a hotel, or a pay phone, you could pick up the receiver and make a call. You didn’t carry a phone around with you; phones were part of the infrastructure.

– Rob Pike
https://usesthis.com/interviews/rob.pike/

This is hard to imagine for computers, today or in the future: a world where you didn’t have to carry around a mobile phone because you could simply use one wherever you went. For many reasons, what Rob describes for phones doesn’t seem likely when applied to computers.

But it’s nice to imagine a world where you didn’t have to update your phone every couple of years. You could rely on a device at home or at work to pick up where you left off, without having to lug something expensive around and keep it charged.

ARKit Impressions

I’ve been working with ARKit recently. I am planning on releasing an AR basketball game when iOS 11 is released.

Here are some miscellaneous thoughts from working with ARKit:

  • It’s hard to find answers to common questions about doing simple things in ARKit. Searching for SceneKit yields slightly more results, but even that is sparse. The Apple developer SceneKit & ARKit forums don’t appear to have much activity either, so it’s up to Stack Overflow & random Internet blog posts.
  • Working with ARKit means working with SceneKit, Apple’s framework for making 3D assets easier for developers to work with. SceneKit & 3D are new to me. A lot of the math around position, orientation, Euler angles, and transforms can get complex fast once matrix transforms and quaternions are involved.
  • It’s really hard to find assets in DAE/Collada. The DAE format is meant to be an interchange format that lets various 3D software communicate with each other. The reality is that exporting to DAE, or converting from another format to DAE, is a crapshoot. I’ve used Blender briefly to look at 3D assets, but digging into 3D modeling is a huge time sink for someone looking to get involved in ARKit. I wish there were an online store that focused on selling low-poly (<10K polygons) DAE files.
  • Relatedly, working with 3D assets as a newcomer is very frustrating. The concepts of bounds vs. scaling, as they relate to importing a model into your SceneKit scene, were very challenging with the 3D model I imported. If you have your own in-house or contracted 3D modeler, you can get 3D assets that work well with SceneKit, but I had countless issues with off-the-shelf 3D models & file formats.
  • After you’re able to import your 3D model, modeling the physics geometry can be a challenge. SceneKit allows you to import the geometry for your physics body as-is using the concavePolyhedron option, but you probably don’t want that. I had to manually recreate a basketball hoop using multiple shapes combined into a single SCNNode (see the sketch after this list).
  • ARKit is not all powerful. The main feature that ARKit gives you is horizontal plane detection. Occlusion doesn’t come with ARKit. Expect many apps that deliver an experience reliant on a plane/surface like your desk or the floor.
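
As an illustration of that compound-shape approach, here’s a minimal sketch. The dimensions and the SCNTorus rim are my own hypothetical stand-ins, not the actual game’s geometry; the key API is SCNPhysicsShape(shapes:transforms:), which combines primitive shapes into one physics body for a single SCNNode.

import SceneKit

// A minimal sketch: approximate a basketball hoop's physics with a few
// primitive shapes instead of the imported model's concave mesh.
// All dimensions below are hypothetical placeholders.
func makeHoopPhysicsBody() -> SCNPhysicsBody {
    let backboard = SCNPhysicsShape(geometry: SCNBox(width: 1.8, height: 1.2, length: 0.05, chamferRadius: 0))
    let pole = SCNPhysicsShape(geometry: SCNCylinder(radius: 0.05, height: 3.0))
    let rim = SCNPhysicsShape(geometry: SCNTorus(ringRadius: 0.23, pipeRadius: 0.02))

    // Offset each shape into place relative to the node's origin.
    let transforms = [
        NSValue(scnMatrix4: SCNMatrix4MakeTranslation(0, 3.0, 0)),    // backboard
        NSValue(scnMatrix4: SCNMatrix4MakeTranslation(0, 1.5, 0)),    // pole
        NSValue(scnMatrix4: SCNMatrix4MakeTranslation(0, 2.7, 0.25))  // rim
    ]

    let compound = SCNPhysicsShape(shapes: [backboard, pole, rim], transforms: transforms)
    return SCNPhysicsBody(type: .static, shape: compound)
}

// Usage: hoopNode.physicsBody = makeHoopPhysicsBody()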

ARKit is exciting, but don’t expect the world yet. Future ARKit releases & better iOS hardware should provide more compelling experiences. Today, you can expect to play with 3D models on a surface (with surface interaction) or in the air (with limited or no environment interaction).

CLI Cut Visual Option

Something I came across recently was command-line text manipulation of delimited text files with cut. The way the list option is passed in is cool.

For demonstration purposes, we have a contrived text document “dummy.txt” that happens to be delimited by the % character. The contents of the file are:

name%car%temp%color
john%honda%fair%blue
tom%benz%fair%red
ed%bmw%cold%green

To get the first column of data, you can run

cut -d% -f1 dummy.txt

which gives you:

name
john
tom
ed

If you want to save the output, the standard command-line “>” redirect comes in handy.
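
For example, to save the first column to a (hypothetical) file named first.txt:

cut -d% -f1 dummy.txt > first.txt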

To get the columns up to (and including) the 2nd column, you can run

cut -d% -f-2 dummy.txt

which gives you:

name%car
john%honda
tom%benz
ed%bmw

To get the 2nd & 3rd columns, inclusive, you can run

cut -d% -f2-3 dummy.txt

which gives you:

car%temp
honda%fair
benz%fair
bmw%cold

To get the 3rd column onward to the last column, you can run

cut -d% -f3- dummy.txt

which gives you:

temp%color
fair%blue
fair%red
cold%green

The examples above are contrived for this demo, but I think the hyphen syntax in the field list option is easy to learn and visually clear (for a CLI).

Crosswalk Aides

I recently went on vacation in Europe. When I visit a new place, I try to get a feel for the environment by walking everywhere. Things like the OK-to-cross icon always amuse me since they’re different in each country.

London has a ton of history (old buildings), but I found that it exceeded my expectations for modern accessibility. The signage throughout the subway and public areas (train stations, etc) was really easy to follow.

In the UK, cars drive on the left side of the road, the opposite of the US. This means people coming from the US have to look the other way for oncoming traffic while crossing the street.

One particularly helpful affordance in London was the painted messages telling you which way to look:

There are also markings for both directions:

I appreciated these messages. The city didn’t have to paint them throughout London, but it did, and they helped me make sure I was looking the correct way for traffic.

Multiple UIDynamicAnimators

In past apps, I tended to have one UIDynamicAnimator in my ViewController and that was that. UIDynamicAnimator lets you apply UIKit Dynamics behaviors and effects to your UIViews.

The issue that I ran into was that removeBehavior(_:), which “Removes a specified dynamic behavior from a dynamic animator”, didn’t seem to work. I would keep track of specific UIDynamicBehavior instances and pass them as the argument to removeBehavior(_:), but it didn’t appear to remove the behavior.

What does work is calling removeAllBehaviors() on the UIDynamicAnimator. That’s fine if you only have one UIView, but most likely you have multiple UIViews & behaviors, and calling removeAllBehaviors() on a single shared animator could leave other UIViews frozen out of place.

Recently, I released a fun weekend app, Fun Faces. While browsing Stack Overflow, it occurred to me to use multiple UIDynamicAnimators, one for each UIView I wanted to animate. This worked for my use case: calling removeAllBehaviors() on one animator doesn’t interrupt the other UIViews’ behaviors (if any).
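
Here’s a minimal sketch of the idea (the class and method names are hypothetical, not Fun Faces’ actual code): each view gets its own animator, so clearing one view’s behaviors leaves the rest untouched.

import UIKit

class FacesViewController: UIViewController {
    // One animator per animated view (hypothetical structure).
    private var animators = [UIView: UIDynamicAnimator]()

    func addGravity(to view: UIView) {
        let animator = UIDynamicAnimator(referenceView: self.view)
        animator.addBehavior(UIGravityBehavior(items: [view]))
        animators[view] = animator
    }

    func stopAnimating(_ view: UIView) {
        // removeAllBehaviors() is safe here: it only touches this view's animator.
        animators[view]?.removeAllBehaviors()
        animators[view] = nil
    }
}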

Using multiple UIDynamicAnimators isn’t an answer if you have multiple UIViews under the same animator with UICollisionBehavior or other effects that let the UIViews interact with each other.

Using CoreMotion deviceMotion to keep an image level (Xcode 8.3, Swift 3.1)

I’ve been playing around with CoreMotion since it is frankly so cool. I followed NSHipster’s CMDeviceMotion post, but I made some changes to use the latest Swift, v3.1. Below is sample code that uses device motion (which fuses the gyroscope and accelerometer) to keep an image level when you rotate your phone.

//
//  ViewController.swift
//
//  Created by Rex on 4/22/17.
//

import UIKit
import CoreMotion

class ViewController: UIViewController {

    let interval = 0.01
    let imageFilename = "bg.jpg"
    let imageWidth = CGFloat(800)
    let imageHeight = CGFloat(1200)
    
    let manager = CMMotionManager()
    var imageView: UIImageView?

    override func viewDidLoad() {
        super.viewDidLoad()

        guard manager.isDeviceMotionAvailable else { return }
        
        setImageView()
        
        manager.deviceMotionUpdateInterval = interval
        let queue = OperationQueue()
        
        manager.startDeviceMotionUpdates(to: queue, withHandler: { (data, error) in
            guard let data = data else { return }

            // Use the gravity vector to get the device's tilt, then
            // counter-rotate the image so it stays level with the ground.
            let gravity = data.gravity
            let rotation = atan2(gravity.x, gravity.y) - .pi

            // UIKit work must happen on the main queue.
            OperationQueue.main.addOperation {
                self.imageView?.transform = CGAffineTransform(rotationAngle: CGFloat(rotation))
            }
        })
    }
    
    func setImageView() {
        if let img = UIImage(named: imageFilename) {
            let iv = UIImageView(image: img)

            // center the image
            let x = (self.view.frame.width/2)-(imageWidth/2)
            let y = (self.view.frame.height/2)-(imageHeight/2)
            iv.frame = CGRect(x: x, y: y, width: imageWidth, height: imageHeight)
            
            self.view.addSubview(iv)
            self.imageView = iv
        }
    }
    
}

The setup is simple. Create a new Single View Application project in Xcode. You’ll need to add a JPG to the Assets.xcassets folder in the project. Replace ViewController.swift with the code above and make sure to update the image filename, width, and height constants.

Hopefully the code is straightforward. We make sure the CMMotionManager’s device motion is available, then add the image view (the only UIView element we add to the screen). We use an OperationQueue to process the rotation calculation off the main queue, then apply the transform to the image view back on the main queue.

App Strategy

On the subject of doing app planning & strategy, I recently came across this post from Rob Caraway: http://robcaraway.com/blog/index.php/2017/02/12/how-i-overcame-crippling-perfectionism-and-made-200k-on-the-saturated-app-store/

Parts of it really resonated with me. He says:

Our strategy was basically “Let’s brainstorm ideas and ship massive features and hope people want them”.

That has been my naive strategy so far: acting as my own ideal user.

Then he talks about validating an MVP:

  • using “Traffic, as indicated by Google Trends”
  • a landing page to capture e-mails
  • building a prototype in a week
  • validating the demand for the prototype

This all seems standard or obvious when you look at it. But in reality, I have various app ideas that I think are worth making, and when it comes to picking the next one, my current process might as well be rolling dice with bad odds. It’s 1000% obvious, but building a neat app with good UX in 2017 doesn’t count for much. Having a solid marketing strategy in a validated niche is significantly more important than building the best app ever.

I’m currently at a point where I’ve released 3 iOS apps. One of them has done decently; the other two have not. I have to decide between prioritizing new features for my current apps or creating a new app. For the sake of learning new iOS tools (like the camera), it’s probably better for me to work on a new app. Hopefully I can properly validate my idea this time before I spend months building it.

DSLRs

With the way the world is going, it’s too convenient to shoot photos on your smartphone. Charging your DSLR battery pack(s), making sure the memory cards are cleared, and lugging around a backpack full of lenses is a lot of work.

I have an old Canon 40D camera with both EF (full frame / crop) & EF-S (crop only) lenses. I’ll be traveling in a few months and I want something nicer than my iPhone for taking photos.

It seems like the two main options are: get a crop DSLR body or get a full frame DSLR body. Staying within Canon’s ecosystem would be the most convenient. Leaving Canon opens up a can of worms (Nikon, Sony, Pentax, etc?).

If I were to just suck it up, the answer would seem to be a new full frame DSLR & L glass (a 24-70 EF lens). But I’m leaning towards getting a relatively cheap new crop DSLR body and making do with what I have (the economical choice).

With a camera, I care about low-light sensitivity (ISO grain) and maybe shutter speed. I don’t care about video options since I don’t intend to shoot or edit movies.

Even though the standalone camera market seems to be dwindling, the big lenses & big image sensors of DSLRs will always provide photography that mobile phones cannot.

iOS 10 Locales and Currency Symbols – Sample App

While working on adding localization to my tip calculator, one thing that seems obvious in retrospect is the difference between a device’s language & region. iOS lets you set the language & region separately. For example, you might want to read text in English, but you could be in Asia. This is relevant to tipping since you could travel to a country where tipping is expected, but the country your phone’s language is associated with doesn’t traditionally tip.
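
Here’s a quick way to see that separation in code (a playground-style sketch; the printed values are illustrative):

import Foundation

// Region-driven locale identifier, e.g. "en_CN" for English language in China.
print(Locale.current.identifier)

// Language preferences, set independently of region, e.g. ["en-US"].
print(Locale.preferredLanguages)

// The currency symbol follows the region, not the language.
print(Locale.current.currencySymbol ?? "none")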

While exploring locales and currency symbols, I whipped together a basic demo app that lets you scroll through all the known locales and their currency symbols in iOS 10. This is pretty useful since you can quickly see what the currencySymbol is for each known iOS locale.

Below is the full implementation of some very hacked-together (quick and dirty) code. All you need to do is:

  • Create new Single View Application project in Xcode
  • Replace the ViewController.swift with below (written for Swift 3)
  • Run the app in Xcode

import UIKit

class ViewController: UIViewController, UITableViewDelegate {
    
    let cellIdentifier = "Cell"
    let currentLocaleHeight = CGFloat(80)
    
    let locales = Locale.availableIdentifiers.sorted { $0.localizedCaseInsensitiveCompare($1) == ComparisonResult.orderedAscending }
    
    override func viewDidLoad() {
        super.viewDidLoad()
        
        let tableView = UITableView()
        // Offset the table below the current-locale label and shrink its height to match.
        tableView.frame = CGRect(x: 0, y: currentLocaleHeight, width: view.frame.width, height: view.frame.height - currentLocaleHeight)
        tableView.dataSource = self
        tableView.delegate = self
        
        self.view.addSubview(tableView)
        
        addCurrentLocaleLabel()
    }
    
    func addCurrentLocaleLabel() {
        let locale = Locale.current.identifier
        
        let width = view.frame.width
        let label = UILabel(frame: CGRect(x: 0, y: 0, width: width, height: currentLocaleHeight))
        label.text = "Current locale: " + local
        label.textAlignment = .center
        view.addSubview(label)
    }
    
}

extension ViewController: UITableViewDataSource {
    
    func numberOfSections(in tableView: UITableView) -> Int {
        return 1
    }
    
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return locales.count
    }
    
    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // Quick and dirty: creates a new cell every time instead of dequeuing a reusable one.
        let cell = UITableViewCell(style: .value1, reuseIdentifier: cellIdentifier)
        
        let localeString = locales[indexPath.row]
        
        let numberFormatter = NumberFormatter()
        numberFormatter.locale = Locale(identifier: localeString)
        
        cell.textLabel?.text = localeString
        cell.detailTextLabel?.text = numberFormatter.currencySymbol
        
        return cell
    }

}

Note that the “¤” symbol means the currency is unspecified.

Using Fastlane Snapshot to generate screenshots with UIPickerViews

This week, I released an update for my Tip Solver calculator to add Chinese localization. I had to generate 5 screenshots for 5 devices (iPhone and iPad) across 3 languages. I easily spent more time automating the process with Fastlane Snapshot than doing it all manually would have taken. But the good news is that I’ve set myself up to painlessly generate screenshots for new languages. Snapshot takes some time to run, but it’s still a huge improvement over generating screenshots manually.

It took me a lot longer than I would have liked to set up my Snapshot process due to my usage of UIPickerViews. Tip Solver makes heavy use of UIPickerView, and I ran into many issues with UITest.

Your mileage may vary, but I found I had to do the following to be able to use UITest and UIPickerViews:

  • disable Ads (which run over the network)
  • drastically reduce the number of UIPickerView rows (in numberOfRowsInComponent)
  • use titleForRow instead of viewForRow for UITest runs (see the sketch after this list)
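
For context, here’s the difference between the two delegate methods (a hypothetical sketch, not Tip Solver’s actual code; items is a stand-in data source):

// Plain titles: the variant that UITest's adjust(toPickerWheelValue:) can match.
func pickerView(_ pickerView: UIPickerView, titleForRow row: Int, forComponent component: Int) -> String? {
    return items[row]
}

// Custom views: much harder for UITest to target.
func pickerView(_ pickerView: UIPickerView, viewForRow row: Int, forComponent component: Int, reusing view: UIView?) -> UIView {
    let label = (view as? UILabel) ?? UILabel()
    label.text = items[row]
    label.textAlignment = .center
    // ...heavy visual customization would go here...
    return label
}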

The last one (using titleForRow) was a complete non-starter since I rely on heavy UIPickerView visual customization. Generating screenshots with incorrect picker views defeats the whole point of the exercise.

I tried using Xcode’s UITest recorder, but I ran into many issues. One glaring issue: while recording, I was able to swipe the UIPickerView up, but on playback it swiped up Control Center instead of adjusting the picker. There is a method (adjustToPickerWheelValue), but I found that it only works with titleForRow (which I don’t use). What I would like is an expansion of the XCUIElement API to add a simple increment/decrement that moves the picker up or down one row.

My final solution (aka workaround) was to use a combination of Fastlane launch arguments & brute-forcing the UIViews (via UIViewController’s viewDidAppear) to generate my screenshots. My workaround isn’t ideal, but it gets the job done.

In my Fastlane Snapfile, I was able to define arguments:

launch_arguments([
 "-screenshot 1",
 "-screenshot 2",
 "-screenshot 3",
 "-screenshot 4",
 "-screenshot 5"
])

In my ViewController (running Swift 3), I was able to handle them accordingly:

let screenshot = UserDefaults.standard.string(forKey: "screenshot")
if screenshot == "1" {
    // do something
} else if screenshot == "2" {
    // do something
} else if screenshot == "3" {
    // do something
} else if screenshot == "4" {
    // do something
} else if screenshot == "5" {
    // do something
}

Once everything is set up, generating screenshots is as simple as running snapshot on the command line.

I’m sure there’s room for improvement in the code (using an enum, etc.), but I left it at that since it’s only for screenshot generation.
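
For instance, a possible cleanup along those lines (hypothetical, not what shipped):

// Hypothetical cleanup using an enum and a switch instead of chained if/else.
enum Screenshot: String {
    case first = "1", second = "2", third = "3", fourth = "4", fifth = "5"
}

if let raw = UserDefaults.standard.string(forKey: "screenshot"),
    let shot = Screenshot(rawValue: raw) {
    switch shot {
    case .first:  break // configure the view for screenshot 1
    case .second: break // ...and so on for the rest
    case .third:  break
    case .fourth: break
    case .fifth:  break
    }
}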

If you’ve made it all the way down here, thanks for reading. I just wanted to share my experience with UITest and UIPickerViews. UITest probably needs more love from Apple, as it was not pleasant to work with.