We have Music, Sound, UI but no Dependency Injection

So, another night coding Hack24, fueled by sugar and Red Bull (well, the cheapo versions). I wanted to try to get Dagger2 working on my project so I could do dependency injection, which is something I am interested in learning. I time-boxed it to 1 hour and failed 🙁 This was mainly due to not understanding how it works with Kotlin. This project has a lot of moving parts that make my life more difficult. If it were a straightforward Android project, life would be fairly simple; however, it uses the LibGDX game engine, which is cross-platform and normally written in Java, and I am writing it in Kotlin. There doesn't seem to be a single example anywhere of doing these two things together. My issue at the moment is with Gradle and the mass of build.gradle files that LibGDX has.
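
For anyone who gets further than I did: the usual stumbling block is that with Kotlin, Dagger2's annotation processing has to go through the kapt plugin rather than Java's annotationProcessor route. Below is a minimal sketch of what the core module's build file might need; the Dagger version is illustrative, and it is written in Gradle's Kotlin DSL, whereas a stock LibGDX project generates Groovy build.gradle files.

```kotlin
// core/build.gradle.kts -- a sketch, not a drop-in LibGDX config
plugins {
    kotlin("jvm")
    kotlin("kapt") // Kotlin's annotation-processing plugin; Dagger needs this
}

dependencies {
    implementation("com.google.dagger:dagger:2.11")
    // Without kapt wiring the compiler in, Dagger never generates the
    // Dagger* component classes and injection fails to build.
    kapt("com.google.dagger:dagger-compiler:2.11")
}
```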

On the positive side, I did add a sound manager that handles sound effects and music (a quick win), I skinned the buttons, added the Hack24 logo, and started building the factory methods that construct the buildings in the game!
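
The sound manager is barely any code thanks to LibGDX's audio API. This isn't my exact class, just a minimal sketch (the asset paths and class name are made up):

```kotlin
import com.badlogic.gdx.Gdx
import com.badlogic.gdx.audio.Music
import com.badlogic.gdx.audio.Sound
import com.badlogic.gdx.utils.Disposable

// Caches short sound effects by name and loops a single music track.
class SoundManager : Disposable {
    private val sounds = mutableMapOf<String, Sound>()
    private var music: Music? = null

    fun playSound(name: String, volume: Float = 1f) {
        val sound = sounds.getOrPut(name) {
            Gdx.audio.newSound(Gdx.files.internal("sounds/$name.ogg"))
        }
        sound.play(volume)
    }

    fun playMusic(path: String) {
        music?.stop()
        music = Gdx.audio.newMusic(Gdx.files.internal(path)).apply {
            isLooping = true
            play()
        }
    }

    override fun dispose() {
        sounds.values.forEach(Sound::dispose)
        music?.dispose()
    }
}
```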

My focus is to build a minimum viable product of Hack24 before Christmas and release it on a single platform (probably Android) as a soft launch.

Main areas I need to look at still:

  • Networking and the server side
  • UI for hacking a building
  • The player logon model, e.g. Apple Game Center, Google Play, etc.

Hack24 update

So progress on Hack24 hit a few walls, as shown below, but it is moving forward 🙂 Don’t worry about the nasty textures; they will be going.

Below it is running on OSX; we have spawning of buildings and collision.

Lesson Learned: Sometimes pretty code is slower code!

Interesting lesson learned: I had written my own 3D engine for Hack24, but then came across this book: https://www.packtpub.com/game-development/building-3d-game-libgdx. Its code was extremely well structured, and I was about to bin my own framework and use this one instead, as it was a bit further along.

I thought I would run a few performance tests, just to be on the safe side. Glad I did:

  • Theirs: 400 objects rendered: 22 FPS
  • Mine: 4,608 objects rendered: 15 FPS

That is over eleven times as many objects for only 7 FPS less, so my less pretty code was dramatically faster per object.
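
If you want to run the same kind of check, LibGDX already tracks the frame rate for you. A minimal sketch, called from the render loop (the once-a-second logging interval is just my choice):

```kotlin
import com.badlogic.gdx.Gdx

// Called every frame from render(); logs the running FPS once a second
// so different engines can be compared at the same object count.
private var fpsLogTimer = 0f

fun logFps(objectCount: Int, delta: Float) {
    fpsLogTimer += delta
    if (fpsLogTimer >= 1f) {
        fpsLogTimer = 0f
        Gdx.app.log("PerfTest", "$objectCount objects: ${Gdx.graphics.framesPerSecond} FPS")
    }
}
```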

Hack24 V2 started :)

So, it may end up being called something else, but I have now started writing a new (well, kinda) game.  For the last couple of weeks I have been bug-fixing and improving the LibGDX framework / demo app that I made a few years ago.  The code can be found here: https://github.com/burf2000/BurfEngine

Yes, it has a terrible name!  Anyway, it was my attempt at a simple Minecraft-like game where you can place and remove cubes.  I added things like chunking, gravity, and culling.  It also has a database and network layer and custom collision code, and it works on iOS, Android, and desktop.  It has been converted to Kotlin, which I am really liking 🙂
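
To give a flavour of the culling part: the world is split into fixed-size chunks, and only chunks whose bounding box falls inside the camera frustum get drawn. A minimal sketch using LibGDX's camera API (the Chunk class here is an assumed name, not BurfEngine's actual code):

```kotlin
import com.badlogic.gdx.graphics.Camera
import com.badlogic.gdx.math.collision.BoundingBox

// A chunk owns a block of world geometry plus a precomputed bounding box.
class Chunk(val bounds: BoundingBox) {
    fun render() { /* draw this chunk's mesh */ }
}

// Skip any chunk whose bounds are completely outside the view frustum.
fun renderVisible(chunks: List<Chunk>, camera: Camera) {
    for (chunk in chunks) {
        if (camera.frustum.boundsInFrustum(chunk.bounds)) {
            chunk.render()
        }
    }
}
```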

Now, after a few long nights of fixing, that code is being parked, and anyone can use it!  I now plan to rip it apart and use the best bits to form the engine for Hack24 (v2).  My main aim is to make a 3D game that has the same look and feel as Hack24 and that’s cross-platform and easy to develop further.

I will focus on an MVP first, which should not take too long, and I will try to rewrite the server in NodeJS (well, KotlinJS).

So why am I doing this?  A few reasons:

I really liked Hack24, but it had some performance issues and really needs rewriting to use OpenGL VBOs / VAOs.  It also makes sense to make it cross-platform while I am at it.  I don’t fancy doing it in Unity, and I really want to learn Kotlin for work (which I can do with LibGDX).  It would be nice if people enjoy playing it too, so I hope to make it bigger with more content 🙂
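
In LibGDX terms, the VBO route mostly means putting geometry into static Mesh objects, which upload their vertex data to the GPU once instead of pushing it every frame. A minimal sketch (illustrative, not the planned engine code):

```kotlin
import com.badlogic.gdx.graphics.Mesh
import com.badlogic.gdx.graphics.VertexAttribute
import com.badlogic.gdx.graphics.VertexAttributes.Usage

// A static Mesh keeps its vertices in a GPU-side VBO: uploaded once, drawn many times.
fun buildTriangleMesh(): Mesh {
    val mesh = Mesh(
        true,                                            // static: data won't change after upload
        3, 0,                                            // 3 vertices, no indices
        VertexAttribute(Usage.Position, 3, "a_position")
    )
    // Upload the geometry once; drawing is then mesh.render(shader, GL20.GL_TRIANGLES)
    mesh.setVertices(floatArrayOf(
        -1f, -1f, 0f,
         1f, -1f, 0f,
         0f,  1f, 0f
    ))
    return mesh
}
```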

Watch this space!

iOSDevUK 2017

iOSDevUK is a conference for iOS developers that takes place at the Aberystwyth University campus in Wales (you get to be a student again).  It is a 4-day event featuring talks on the latest iOS frameworks and best practices, and it ends with a 10-hour hackathon.  Sadly, Andrew (an iOS developer from Priority) and I could not attend the hackathon.

The event was less hands-on than previous developer events like iOSConf, which was a shame, but we still got to learn about some of the latest iOS 11 frameworks, such as:

ARKit

This is Apple’s augmented reality framework, which looks really impressive.  We have been waiting for Apple to do something in the AR space since they bought Metaio in 2015.  With the new iPhones having two rear-facing cameras that allow the device to detect depth, the mapping of virtual objects onto real-world objects has become very accurate.  In the workshop we saw how to place Space Invaders ships in the real world.

CoreML

CoreML is Apple’s machine learning framework, which allows you to take your algorithms from other platforms and use them on your iOS device.  You can’t actually train your model on the device, but you can import it from many different tools (Caffe, scikit-learn, Keras) and it will run on the device, hardware accelerated. The main aim of this talk was to clarify what CoreML’s abilities were, as there was a lot of confusion when Apple announced it.

FileProviders

So with iOS 11, you’re going to get a file system, like you do on Windows etc.  This workshop showed how you could make your own cloud service like Dropbox and integrate it into iOS 11.  This was one of the only talks in Objective-C rather than Swift.

Server-Side Swift

This talk went over three apps that the company had made using server-side Swift: a SlackBot, a CI tool, and an Alexa tool.  However, the presenter did say that server-side Swift was far from production-ready, which was a bit of a letdown.  They suggested the best way to get started with it was to use Docker.

There were a few talks on design patterns and where to use which ones, like VIPER, MVVM, MVP, etc.  VIPER seems to be becoming popular if your iOS app is very big.

There were a few talks on the whole pipeline of testing and releasing builds using unit tests, UI tests, Jenkins, Fastlane, and GitHub issues; however, it was only an overview and not actually how to go about setting it all up.

WWDC 2017

This year’s WWDC saw lots of new hardware being released but not too many cool new features mentioned. This definitely felt like the year that Apple played catch-up to Amazon, Google, and Samsung.

Safety

Two features stood out to me: Emergency SOS and Do Not Disturb While Driving. The first allows you to set up an “Auto Call” to the emergency services if the Sleep/Wake button is pressed five times.  The second will block all notifications, texts, and phone calls if the phone detects you’re driving.  Of course, you can turn it off if your friend is driving!  I think features like this really should get more attention because they can save lives!

Business Chat

This was not mentioned in the main WWDC keynote, but thanks to Sean Antony from Digital Products it was brought to my attention.  This is actually a big deal.  The idea is that you can just message a business like you would a friend, instead of phoning them up, being put on hold, listening to terrible music, etc.  You can even purchase goods directly in the chat.

HomePod

In simple terms, this is Apple’s answer to Google Home and Amazon’s Echo; however, it’s being marketed as a smart speaker rather than a smart assistant.  It is designed to deliver amazing audio quality and uses spatial awareness to sense its location in a room and automatically adjust the audio.  However, it costs over twice as much as an Amazon Echo or Google Home device and requires an Apple Music subscription (sorry, Spotify fans).  That being said, Apple fans will buy it, so don’t be surprised if high numbers are sold.

Hardware updates

A new MacBook, iMacs, and iPad Pros all got announced. The one that caught my eye was the new iMac Pro, the most powerful machine they have ever made; starting at $5,000, it should be!  I am an Apple fan, but you can buy a lot for $5,000.  The new iPad Pros support brighter screens; the 9.7” one has put on some weight and is now 10.5”, and both offer screen refresh rates of up to 120Hz for better responsiveness and smoother motion.

MacOS High Sierra

Apple’s new operating system is called High Sierra; I am not sure much effort went into the name (the previous version is Sierra).  It will finally allow Mac users to experience virtual reality (assuming you have a new enough Mac) and supports a new file system called Apple File System.  Safari will now block autoplaying videos and keep advertisers from tracking Mac users.

Apple TV / Apple Watch

Very minor updates here. Apple TV can act as a HomeKit speaker and is getting Amazon Prime (whoop whoop). The Apple Watch got some new fitness features, Core Bluetooth support, Toy Story clock faces, and a watch face that uses Siri to offer up dynamic suggestions that change based on user preference and time of day.

iOS 11: “The biggest iPad release ever”

This is what we are truly here for!  iOS 11 looks to really improve the multitasking ability of the iPad: it allows you to drag and drop things and gives you a Mac-like dock at the bottom of the screen.  The demo they gave did get me wanting a new iPad.  They have also introduced a new Files app, which is just their iPad version of Finder / Windows Explorer.

Among the other new features: Apple Pay now supports person-to-person payments via iMessage; a new framework called ARKit will make building augmented reality apps a lot easier; and a new machine learning framework called CoreML will allow you to take your complex algorithms from other systems and run them on your phone’s GPU.  This will pave the way for more intelligent apps.

Siri is also getting smarter via deep learning and will soon suggest content to its users based on their Safari searches 🙂  iOS 11 also knows when you use an app, and so it now has a setting to offload unused apps; this is a great feature for saving space on your device.

Swift Playgrounds

This is an iPad app that allows you to learn to code in Swift.  Since it has been on the App Store (just under a year), over 1 million people have used it to learn Swift.  Version 1.5 has a host of new features, including a framework to talk to robots, drones, and robotics kits like the LEGO Mindstorms kit, the Parrot drone, or the Dash robot.

Google IO 2017

Google IO is Google’s annual developer conference, held in Mountain View. This year I attended Google IO Extended, which happens all around the world at the same time as the main IO event; it’s designed for people who can’t make it to the main event but want to know the latest stuff.

There was one main theme this year from Google, and it’s summed up in this phrase:

“Mobile first to AI first”

In every area that Google spoke about (from new processing hardware and home automation to Android devices), everything had been improved by AI!

Another nice fact they mentioned: Android now runs on over 2 billion devices, and 82 billion apps were installed last year.

Below are some of the big headlines!

Google Lens

A new app designed for your phone: point it at something, be it a flower, a restaurant sign, or a Wi-Fi label, and it will understand it, identifying the flower, showing the menu for the restaurant, or automatically joining the Wi-Fi! It can also translate languages on signs.

They also showed a cool demo where the AI could detect an obstruction (a wire fence) and remove it from the picture. This is a huge leap in computer vision.

Google Home

Google Home seems to do a lot more than I realised; for instance, it can recognise up to six different people in a household and customise the experience for each one. Now Google is adding phone calling to Google Home for free. Only available in the US currently, it lets you just ask Google Home to phone your mum, for instance, and it will recognise who you are and find your mum in your contacts. If your partner does the same thing, it will phone their mum, not yours.

Another new feature is visual responses, which is super cool. You can ask Google Home something, say “What does my calendar look like today?”, and Google will display it on a smart TV, Chromecast, or Google-connected device. I really think this will become super useful. You could ask Google Home how long it will take to get somewhere, then tell it to send the directions to your phone.

They also introduced something called Proactive Assistance: the idea is that Google Home will detect things that may be important to you and let you know about them via a visual light on the device, for example if traffic is really bad and you have a meeting coming up soon.

Google Home now integrates with over 70 smart home manufacturers.

Virtual Reality

Google already makes a VR framework (Daydream) and a headset your phone fits into. This year, Google announced two standalone VR headsets (no phone, PC, etc. needed) coming out this year, and has partnered with HTC (who make the HTC Vive VR headset) and Lenovo (who make the Project Tango tablet for 3D mapping / AR). What’s very interesting here is that they are bringing out their own indoor tracking solution that does not need external sensors. They call it VPS (Visual Positioning System), which I believe could be an advanced version of SLAM.

They also announced that the Samsung S8 will support the normal Daydream VR headset, which I found odd, as Samsung is in partnership with Oculus (owned by Facebook, a direct rival of Vive) and already has the GearVR.

Augmented Reality

Google announced another Tango handset (it’s like a Microsoft Kinect embedded into an Android tablet) and announced Expeditions AR, which brings AR to the classroom. Kids will be able to place 3D augmented objects within the classroom, for example to see how volcanoes erupt.

Suggested Sharing

Suggested Sharing is a new feature for Google Photos that uses AI to detect well-taken pictures and who is in them. It then suggests / reminds you to share each picture with the people in it. It forms an online collection of all the images, so you finally get to see images with you actually in them (if someone else took them). There is also an automatic mode, for example if you always want to share pictures of your kids with your partner. It feels a little scary to me.

Cloud TPUs

So, anyone in computing will know what a CPU (central processing unit) and a GPU (graphics processing unit) are. Google likes to do its own thing and last year announced the TPU (tensor processing unit), which is designed to be very quick at machine learning workloads. Google is now calling them Cloud TPUs, and each one can do 180 teraflops.

Android O

There were a few new features mentioned in the keynote, but nothing I found too exciting. They mentioned picture-in-picture and notification dots, both of which iOS already has. They mentioned Android Studio 3 and support for Kotlin as a first-class language; again, I guess it’s their answer to Swift for iOS. There was the usual focus on battery usage, security (Google Play Protect), and making apps boot and run faster; they say they have seen 2x improvements in apps’ running speed. Google has also improved the copy-and-paste features so that they automatically recognise addresses, company names, phone numbers, etc., which in all honesty I thought they already did.

iOS Support

Throughout the presentation, whatever new stuff they demoed, they kept making the point that it is also supported on iOS, not just Android (Google Assistant, Google Photos, Daydream, etc.), which I personally thought was cool.

Lastly, and probably the one that made me laugh the most!

YouTube

YouTube for TVs and consoles will now support 360° video, including live events; YouTube viewing on TVs has gone up by 60%. However, the big news is Super Chat and Trigger Actions.

Super Chat allows you to pay for your comment (to a live YouTuber) to be noticed, so if you really want to ask that question, you can pay for it. Not too bad, I guess. But Trigger Actions allow you to pay to trigger something in the live video, such as throwing a water bomb at the presenter or turning the lights off in their house. I can see this going downhill pretty fast.

VEX Worlds 2017 – Robotics Competition

Sorry for the late post about VEX Worlds; I thought I would have more time after Worlds to catch up with stuff, but sadly (well, not really) the kids have been mega active.  My eldest son played his first football tournament, we had a holiday, and there was lots of family stuff!

So, VEX Worlds: what an amazing experience. I went along for the VEX EDR side of the competition (this year it was split into EDR and IQ), as I was showing off the EDR Tank.  Sadly, I had to leave the US early, as my son Max was ill.  Still a very cool experience!

So, the EDR Tank: well, it performed really, really well in remote-control mode.  I mean, the thing was fairly slow, but it must have covered MILES!  The batteries never died on me, nor did any motors!  I did kill a few omni wheels; however, that’s to be expected.  Even though I left early, the EDR Tank did not, and so others drove it around.  I have not received it back yet to see how bad it is now, but I am sure it will be fine.

The autonomous side was a bit of a failure, to be honest, and looking back I had set myself up to fail; I will explain why.  The autonomous side used ROS (Robot Operating System), which is an industry standard.  I was using a Neato lidar, which is awesome but only has a range of 5 metres, and SLAM (simultaneous localisation and mapping) to work out where I was and where I needed to go by building up a map.  SLAM works by detecting features of the surrounding area to figure out where it is.  When you’re in a hall that’s hundreds of metres wide with very few features, a sensor with a range of 5 metres is practically useless.  In the end, I just showed kids how it worked on my laptop using RViz.  To do this properly, I would need to invest in a proper lidar system with a much greater range.  Another aspect that makes this very hard is all the people moving around: how can SLAM pick up features if they are constantly moving?

Overall, the EDR Tank was hugely popular; I gave tons of fist bumps, high fives, etc.  People just thought it was cool, just a little slow.

Next year, if I did a vehicle again, I would have to make it a lot faster and forget about advanced sensors!

Here are some videos of VEX Worlds and the EDR Tank: