We are once again getting close to Apple's Worldwide Developers Conference, and as it does every year, the press is busy trying to predict and preview what is ahead for the first week in June. Once again, they will completely miss the point or get it totally wrong.
WWDC is a place to announce things to developers, not consumers. The keynote was once just that: the keynote of a developers conference. Now it's a smokescreen for the rest of the conference, the consumers' peek into an event that is otherwise beyond them. At last year's WWDC, the truly good stuff showed up in the Platforms State of the Union address, two hours after the keynote ended. The press gets everything wrong, and the keynote has become a smokescreen, for the same reason: their audience is not developers but consumers.
This is not an Apple-only problem. Concentrating on AI at Google I/O while missing the number of Kotlin-based APIs is a similar mistake. In the Android world, there is a huge trend, if not a stampede, to dump Java for Kotlin. It's a change that will rock the Android market, yet almost no one is writing about such a technical issue.
As a developer and as an author of iOS courses for Lynda.com and LinkedIn Learning, I watch trends at Apple, and I see some gaps in products and APIs that need to be filled. Over the years, I've noticed some trends, and you'll better predict what Apple is up to with these in mind:
With those five in mind, I've heard some rumors that sound very sensible and follow a track I've already seen in other places. My own development work and hints I've read from Apple also give me some ideas. While I'm sure we'll get the new versions of macOS, iOS, watchOS, and tvOS, what intrigues me is what's under the hood, and what that really says about Apple.
The WWDC schedule found in the WWDC app before the keynote is usually very enigmatic, with all session names hidden under cute phrases. This year, those phrases pair an emoji with a pun: a smiling emoji with "This will put a smile on your face," or a bee with "Here's one that will generate buzz." Usually, that's 100% of the lecture sessions.
What's different this year is that several sessions are already listed, and they do have a theme: getting developers to design and build better, faster, and more secure apps. From security-checking incoming data, to the varieties of interaction on a button, to best practices for TextKit, the known sessions all pull toward this theme.
Even in the obscured sessions, we can see another optimization Apple is pushing. Many entries are in Chinese, Japanese, Korean, French, and what I think is German and Portuguese. The first session on Tuesday, when the individual sessions get going, is "Creating Apps for a Global Audience." We will probably see a lot more of Apple outside the U.S. The company is promoting its international developers and pushing U.S. developers to build for an international market, something I'll admit I've been reluctant to do in my own apps.
In previous years, Apple didn't push optimization from developers as hard as I feel it is this year, even before WWDC begins. Usually, Apple optimizes its hardware so developers can afford to be lazy about such things. But sessions on optimizing images point to Apple beginning to put its foot down, asking developers to do their part in building good apps.
Last year was momentous for Swift in a way only developers could appreciate: all but one of the Swift changes were additive, not requiring huge changes to existing code. Much of what remains to change is fine detail at this point. While the language is as stable as a programming language gets, there are Foundation and UIKit improvements that still need some work. One I see that has to happen is better bridging between NSRange and Range. While both describe ranges, they do so in different ways: NSRange is based on an origin point and a length; Range on a start and end point. Reconciling these two is a challenge, but one that would make a huge difference in code and code production, especially in text processing. We'll see more of those kinds of tweaks, not anything revolutionary.
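Foundation does already give us converting initializers between the two representations, even if they remain clunky to use day to day. A minimal sketch of round-tripping a range through both forms, using a sample string of my own:

```swift
import Foundation

let text = "Hello, WWDC"

// NSRange: an origin (location) plus a length
let nsRange = NSRange(location: 7, length: 4)

// Range<String.Index>: a start and end position in the string.
// Range(_:in:) converts an NSRange into the string's own indices.
if let range = Range(nsRange, in: text) {
    print(text[range])  // WWDC
}

// And back again with NSRange(_:in:)
if let range = text.range(of: "WWDC") {
    let roundTrip = NSRange(range, in: text)
    print(roundTrip.location, roundTrip.length)  // 7 4
}
```

Both conversions are failable or index-based because NSRange counts UTF-16 code units while Range walks String.Index positions, which is exactly the mismatch that makes text processing code awkward today.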
With the introduction of the sixth-generation iPad, the Apple Pencil becomes a more popular option. Under the hood, the Apple Pencil is an excellent example of one of Apple's guiding principles: make the hardware so simple to use in code that developers can't avoid supporting it, selling Apple Pencils in the process. As you can see in my iOS Developer Tips Weekly video on the Apple Pencil, everything you do with your finger, the Pencil does using the existing code for touches. The Apple Pencil adds a few properties for force and angles that you don't get with a finger, though it reads the same as any other touch. This is too simple an API not to have a big payoff.
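To illustrate how little the Pencil asks of you, here is a sketch of a hypothetical drawing view (the class name and the normalization are mine) reading the Pencil-specific UITouch properties inside the same touch callback a finger would use:

```swift
import UIKit

class CanvasView: UIView {
    // Pencil input arrives through the same touch callbacks as a finger.
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.type == .pencil else { return }

        // Extra readings the Pencil provides beyond a plain touch:
        let pressure = touch.force / touch.maximumPossibleForce  // normalized 0...1
        let altitude = touch.altitudeAngle                       // tilt from the screen plane
        let azimuth  = touch.azimuthAngle(in: self)              // compass direction of the tilt

        print(pressure, altitude, azimuth)
    }
}
```

If the device has no Pencil paired, the guard simply falls through and finger touches behave as before, which is why adopting the Pencil costs almost nothing.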
Given how Apple borrows tech from itself, we could see the next generation of iPhones with Pencil support. That might seem a bit odd at first, but there are a few things to consider: Apple's approach to AI seems to make it less a flashy feature and more whatever helps the user the most.
As recent events with talking tubes have pointed out, privacy and security are difficult with such AI-based technologies. Talking to an AI is only good in situations where you really don't care about privacy, or where you cannot use any other form of input, such as while driving.
Outside of fingerprints and facial recognition, the other highly personalized identification, used for centuries, is handwriting. Handwriting recognition would allow users to identify themselves by their handwriting, and to move from handwriting to text quickly. Forgers need time and visual cues to forge handwriting, both of which can be made very difficult in electronic writing with invisible strokes and stroke timing. Beyond security, not everyone types quickly, nor can every idea be typed. Some languages, like Chinese or Japanese, are far better drawn than typed. Besides the pattern matching of machine learning, good handwriting recognition needs an accurate, fast stylus, which arguably didn't exist before the Apple Pencil.
Apple may bring us back to the days of the Newton and the Palm, this time with our own natural handwriting. Apple had some of this in the Notes betas last year, and I'd expect this to be one trend to watch for.
As the author of the Learning Swift Playgrounds Application Development course, I'm obviously a bit biased toward Swift Playgrounds. Playgrounds are a critical part of my workflow, in Xcode and even more so on the iPad.
Swift Playgrounds for iPad has two major uses: as an app to teach basic coding and as an app to prototype applications. Because the iPad app uses the native hardware, it is often faster, and it's the only way to quickly try out some APIs you can't reach in simulators.
It has been clear since I first put a beta version of Playgrounds on my iPad Pro two years ago that Swift Playgrounds for iPad is a gateway drug to hook kids into Swift and the Apple ecosystem. The education keynote a few months ago did nothing to dispel that idea. The question is what to do after the adventures with Byte and the other templates. So far, Apple has supported third-party templates through a rather difficult and confusing set of files and property lists, which you must then manually add to a subscription link. While my course covers how to do this, I'm the first to admit it is not easy and requires Xcode to do a decent job. Many educators do not have the time, resources, or patience to build lessons from code.
For its educational role, iPad Playgrounds needs an authoring system that makes course construction far easier than it is now. It looks like we're going to get it — maybe. The course description for Create Your Own Swift Playgrounds Subscription (Friday, June 8, 11:00) mentions creating your own content using a new template. How good this template is for easy creation of content, compared to the starter file, is yet to be seen.
While I'll play with Byte in the Learn to Code series for a fun puzzle, that isn't my primary use for iPad Playgrounds. I do most of my research, writing, and prototyping on my iPad or iPad Pro. I see two improvements necessary to make iPad Playgrounds a full prototyping environment: a console view and write access to the Sources folder. While playgrounds have the live view and you can set viewers, some prototyping just works better with an old-fashioned console and a simple print statement. Working with any collection type is still very difficult with viewers. Even more importantly, a console gives you a full error message instead of the completely worthless "There's something wrong with your code" message. I'd settle for a template with a console, but integrating a console like the one in Xcode playgrounds would be ideal, both for prototyping and for advanced education.
The Sources folder is one of the lesser-known features of Swift playgrounds. You can precompile classes before running your playground, and use those classes in your code. Everything has to be public, but it makes for smaller, cleaner playgrounds once you've finished one part of your prototyping and want to build on it. Playgrounds in both Xcode and on iPad need a better way of converting a current playground into the Sources folder.
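The public-access requirement is the main surprise when you first try this. A minimal sketch, with a type name and contents of my own invention, of what moving code into Sources looks like:

```swift
// In the playground's Sources folder (compiled once, before the page runs):
// every type, initializer, and method must be marked public
// or the playground page can't see it at all.
public struct Prototype {
    public let name: String

    public init(name: String) {
        self.name = name
    }

    public func banner() -> String {
        return "Prototyping: \(name)"
    }
}

// Then, on the playground page itself, you just use the type:
let p = Prototype(name: "WatchFace")
print(p.banner())  // Prototyping: WatchFace
```

Forgetting a single `public` on an initializer is the classic stumble: the type appears to exist but can't be constructed from the page.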
One rumor the press will cover is a watchOS one. I've seen a few articles on custom watch faces in watchOS 5, and I could see this as a possibility. If you look at an Apple Watch, there are two watch face types: digital and analog. Digital is text, not far off from the WKInterfaceDate already usable within apps. Analog is the more interesting case, as it contains several components: the hour, minute, and second hands; the background; and the tick marks for the watch numbering. All of this could easily be made a class or delegate in WatchKit, so technically it's not a big leap to releasing that API, which probably already exists internally at Apple. How to deliver it is another question.
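To make that concrete, here is a purely hypothetical sketch of what such an API might look like. Every name here is invented; nothing like this exists in WatchKit today. It simply shapes a protocol around the components listed above:

```swift
// Hypothetical: none of these types exist in WatchKit.
// This only illustrates how the analog face's parts could decompose
// into a delegate-style protocol a developer would implement.

enum Hand {
    case hour, minute, second
}

protocol AnalogFaceProviding {
    var backgroundImageName: String { get }
    var tickStyle: String { get }  // e.g. "roman", "baton"
    func handImageName(for hand: Hand) -> String
}

struct DemoFace: AnalogFaceProviding {
    let backgroundImageName = "starfield"
    let tickStyle = "baton"

    func handImageName(for hand: Hand) -> String {
        switch hand {
        case .hour:   return "hourHand"
        case .minute: return "minuteHand"
        case .second: return "secondHand"
        }
    }
}
```

The point of the sketch is how small the surface area is: a handful of images and a style choice, which is why releasing such an API seems technically easy and the delivery mechanism is the harder question.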
I don't expect all this to be available for end users to customize their own watches. My guess is watch faces will be a lot like Messages stickers: developers will make them in Xcode, setting the components I mentioned above, then sell them in the App Store. The reason I don't expect users to get their hands on that level of customization is twofold: watch faces can be another income stream, and adding watch complications to customized faces is beyond what users can be expected to handle.
I could give my usual list of things I'd like to see, such as iOS on the Apple TV. I'm not hopeful for most of those, and if you want to know what they are, go back and read my posts from previous years. I'm not expecting to be surprised by much, but Apple always finds a surprise or two no one ever saw coming. Apple has capitulated to the press over the last few years, making the keynote more about hardware than software, and that makes it hard for a developer to watch. This is supposed to be our show. As annual as my wish for an Apple TV that works like an iPad on a larger screen, so too will the press be tossed a few hardware items at the keynote, get a little excited, then claim Apple is doomed since there is no new innovation, meanwhile missing all the stuff going on under the hood.