So progress on Hack24 hit a few walls, as shown below, but it is moving forward 🙂 Don’t worry about the nasty textures; they will be going.
Below it is running on OS X: we have spawning of buildings, and collision.
So, it may end up being called something else, but I have now started writing a new (well, kinda) game. For the last couple of weeks I have been bug fixing and improving the LibGDX framework / demo app that I made a few years ago. Code can be found here: https://github.com/burf2000/BurfEngine
Yes, it has a terrible name! Anyway, it was my attempt at a simple game like Minecraft where you can place and remove cubes. I added things like chunking, gravity, and culling. It also has a database and network layer, custom collision code, and works on iOS, Android, and Desktop. It has been converted to Kotlin, which I am really liking 🙂
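Chunking is worth a quick illustration. This is not BurfEngine’s actual code (the names and the chunk size are made up, and the engine itself is Kotlin / LibGDX); it is just a minimal Python sketch of the idea: split the world into fixed-size chunks so placing or removing a cube only touches one small region.

```python
# Illustrative sketch of voxel chunking (hypothetical names, not BurfEngine's API).
# The world is split into fixed-size chunks so only nearby chunks need to be
# meshed, rendered, or checked for collision.

CHUNK_SIZE = 16  # blocks per chunk edge (a common choice; the real engine may differ)

def chunk_coord(x: int, y: int, z: int):
    """Map a world block coordinate to the chunk that contains it."""
    return (x // CHUNK_SIZE, y // CHUNK_SIZE, z // CHUNK_SIZE)

def local_coord(x: int, y: int, z: int):
    """Position of the block inside its chunk."""
    return (x % CHUNK_SIZE, y % CHUNK_SIZE, z % CHUNK_SIZE)

class ChunkedWorld:
    """Sparse chunk storage: only chunks that actually contain blocks exist."""
    def __init__(self):
        self.chunks = {}  # (cx, cy, cz) -> {(lx, ly, lz): block_id}

    def place(self, x, y, z, block_id):
        self.chunks.setdefault(chunk_coord(x, y, z), {})[local_coord(x, y, z)] = block_id

    def remove(self, x, y, z):
        self.chunks.get(chunk_coord(x, y, z), {}).pop(local_coord(x, y, z), None)

    def block_at(self, x, y, z):
        return self.chunks.get(chunk_coord(x, y, z), {}).get(local_coord(x, y, z))
```

The nice side effect is that edits only dirty one chunk, so only that chunk’s mesh needs rebuilding, which is what keeps a Minecraft-like game playable.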
Now, after a few long nights of fixing, that code is being parked and anyone can use it! I now plan to rip it apart and use the best bits to form the engine for Hack24 (v2). The main aim is to make a 3D game that has the same look and feel as Hack24, that’s cross-platform and easy to develop further.
I will focus on an MVP first, which should not take too long, and I will try to rewrite the server in NodeJS (well, KotlinJS).
So why am I doing this? A few reasons:
I really liked Hack24, but it had some performance issues and really needs rewriting to use OpenGL VBOs / VAOs. It also makes sense to make it cross-platform while I am at it. I don’t fancy doing it in Unity, and I really want to learn Kotlin for work (which I can with LibGDX). It would be nice if people enjoy playing it too, so I hope to make it bigger with more content 🙂
Watch this space!
In summary, I have abs and so it’s time to fix the brain with some development 🙂
iOSDevUK is a conference for iOS developers that takes place at the Aberystwyth University campus in Wales (you get to be a student again). It is a 4-day event which features talks on the latest iOS frameworks and best practices, and ends with a 10-hour hackathon. Sadly Andrew (iOS developer from Priority) and I could not attend the hackathon.
The event was less hands-on than previous developer events like iOSCon, which was a shame, but we still got to learn about some of the latest iOS 11 frameworks like:
This is Apple’s augmented reality framework, which looks really impressive. We have been waiting for Apple to do something in the AR space since they bought Metaio in 2015. With the new iPhones having two rear-facing cameras to allow the device to detect depth, the mapping of virtual objects to real-life objects has become very accurate. In the workshop we saw how to place Space Invaders ships in the real world.
CoreML is Apple’s machine learning framework, which allows you to take your algorithms from other platforms and use them on your iOS device. You can’t actually generate your model on the device, but you can import it from many different tools (Caffe, scikit-learn, Keras) and it will run hardware-accelerated on the device. The main aim of this talk was to clarify what CoreML’s abilities were, as there was a lot of confusion when Apple announced it.
So with iOS 11, you’re going to get a file system, like you do in Windows etc. This workshop showed how you could make your own cloud service like Dropbox and integrate it into iOS 11. This was one of the only talks in Objective-C rather than Swift.
This talk went over 3 apps that the company had made using server-side Swift, which included a SlackBot, a CI tool and an Alexa tool. However, the presenter did say that server-side Swift was far from production-ready, which was a little bit of a letdown. They suggested the best way to get started with it was to use Docker.
There were a few talks on design patterns and where to use which ones, like VIPER, MVVM, MVP etc. VIPER seems to be becoming popular if your iOS app is very big.
There were a few talks on the whole pipeline of testing and releasing builds using unit tests, UI tests, Jenkins, Fastlane and GitHub issues. However, it was only an overview and not actually how to go about setting it up.
This year’s WWDC saw lots of new hardware being released but not too many cool new features mentioned. This definitely felt like the year that Apple played catch-up to Amazon, Google and Samsung.
Two features stood out to me: Emergency SOS and Do Not Disturb While Driving. The first allows you to set up an “Auto Call” to the emergency services if the Sleep/Wake button is pressed five times. The second will block all notifications, texts and phone calls if the phone detects you’re driving. Of course you can turn it off if your friend is driving! I think features like this really should get more attention because they can save lives!
This was not mentioned in the main WWDC keynote, but thanks to Sean Antony from Digital Products it was brought to my attention. This is actually a big deal. The idea is that you can just message a business like you would a friend, instead of phoning them up, being on hold, listening to terrible music etc. You can even purchase goods directly in the chat.
In simple terms this is Apple’s answer to Google Home and Amazon’s Echo; however, it’s being marketed as a smart speaker rather than a smart assistant. It is designed to deliver amazing audio quality and uses spatial awareness to sense its location in a room and automatically adjust the audio. However, it costs over twice as much as an Amazon Echo or Google Home device and requires an Apple Music subscription (sorry, Spotify fans). That being said, Apple fans will buy it, and don’t be surprised if high numbers are sold.
New MacBooks, iMacs and iPad Pros all got announced. The one that caught my eye was the new iMac Pro, the most powerful machine they have ever made; starting at $5,000, it should be! I am an Apple fan, however you can buy a lot for $5,000. The new iPad Pros support brighter screens; the 9.7” one has put on some weight and is now 10.5”, and both offer screen refresh rates of up to 120Hz for better responsiveness and smoother motion.
MacOS High Sierra
Apple’s new operating system is called High Sierra; I am not sure much effort went into the name (the previous version is Sierra). It will finally allow Mac users to experience virtual reality (assuming you have a new enough Mac) and supports a new file system called Apple File System. Safari will now block autoplaying videos and keep advertisers from tracking Mac users.
Apple TV / Apple Watch
Very minor updates here. Apple TV can act as a HomeKit speaker and is getting Amazon Prime (whoop whoop). Apple Watch got some new fitness stuff, Core Bluetooth support, Toy Story clock faces and a watch face that uses Siri to offer up dynamic suggestions that change based on user preference and time of day.
iOS 11 “The Biggest iPad release ever”
This is what we are truly here for! iOS 11 for iPad looks to really improve the multitasking ability of an iPad: it allows you to drag and drop things and gives you a Mac-like dock at the bottom of the screen too. The demo they gave did get me wanting a new iPad. They have also introduced a new Files app, which is just their iPad version of Finder / Windows Explorer.
Other new features: Apple Pay now supports person-to-person payments via iMessage, a new framework called ARKit will make building augmented reality apps a lot easier, and the new machine learning framework, CoreML, will allow you to take your complex algorithms from other systems and run them on your phone’s GPU. This will pave the way to more intelligent apps.
Siri is also getting smarter via deep learning and will soon suggest content to its users based on their Safari searches 🙂 iOS 11 also knows when you use an app, and so now has a setting to offload unused apps; this is a great feature to save space on your device.
Swift Playgrounds is an iPad app that allows you to learn to code in Swift. Since it’s been on the App Store (just under a year), over 1 million people have used it to learn Swift. Version 1.5 has a host of new features, including a framework to talk to robots, drones and robotics kits like the LEGO Mindstorms kit, the Parrot drone or the Dash robot.
Google IO 2017
Google IO is Google’s annual developer conference held in San Francisco. This year I attended Google IO Extended which happens all around the world at the same time as the main IO event, it’s designed for people who can’t make it to the main event but want to know the latest stuff.
There was one main theme this year from Google, and it’s summed up in this phrase:
“Mobile first to AI first”
In every area that Google spoke about (from new processing hardware, home automation to Android devices) everything had been improved by AI!
Another nice fact they mentioned was that Android now runs on over 2 billion devices and 82 billion apps were installed last year.
Below are some of the big headlines!
A new app designed for your phone: point it at something, be it a flower, a restaurant sign or a Wi-Fi label, and it will understand it, identifying the flower, showing the menu for the restaurant or automatically joining the Wi-Fi! It can also translate languages on signs.
They also showed a cool demo where the AI could detect obstructions (a wire fence) and remove them from the picture. This is a huge leap in computer vision.
Google Home seems to do a lot more than I realised; for instance, it can recognise up to 6 different people in a household and customise the experience for each one. Now Google is adding phone calling to Google Home for free. Only available in the US currently: you can just ask Google Home to phone your mum, for instance, and it will recognise who you are and find your mum in your contacts. If your partner does the same thing, it will phone their mum, not yours.
Another new feature is visual responses, which is super cool. You can ask Google Home something, say “what does my calendar look like today”, and Google will display it on a smart TV, Chromecast or Google-connected device. I really think this will become super useful. You could ask Google Home how long it will take to get somewhere, then tell it to send directions to your phone.
They also introduced something called Proactive Assistance. The idea is that Google Home will detect things that may be important to you and let you know about them via a visual light on the device; for example, if traffic is really bad and you have a meeting coming up soon.
Google Home now integrates with over 70 smart home manufacturers.
Google already make a VR framework (Daydream) and a headset your phone fits into. This year Google announced two standalone (no phone, PC etc. needed) VR headsets coming out this year, and have partnered with HTC (who make the HTC Vive VR headset) and Lenovo (who make their Project Tango tablet, for 3D mapping / AR). What’s very interesting here is that they are bringing out their own indoor tracking solution that does not need external sensors. They call it VPS (visual positioning system), which I believe could be an advanced version of SLAM.
They also announced that the new Samsung S8 will support the normal Daydream VR headset, which I found odd as Samsung are in partnership with Oculus (Facebook, a direct rival to Vive) and already have the Gear VR.
Google announced another Tango handset (it’s like a Microsoft Kinect embedded into an Android tablet) and announced Expeditions, which brings AR to the classroom. Kids will be able to place 3D augmented objects within the classroom, for example to see how volcanoes erupt.
Suggested Sharing is a new feature for Google Photos that uses AI to detect well-taken pictures, and who is in them. It then suggests / reminds you to share that picture with the people in it. It forms an online collection of all the images, so you finally get to see images with you actually in them (if someone else took them). There is also an automatic mode, for example if you always want to share pictures of your kids with your partner. Feels a little scary to me.
So, anyone in computing will know what a CPU (central processing unit) and a GPU (graphics processing unit) are. Google likes to do their own thing and last year announced the TPU (Tensor Processing Unit), which is designed to be very quick at machine learning workloads. Google are now calling them Cloud TPUs, and each one can do 180 teraflops.
There were a few new features mentioned in the keynote, but nothing I found too exciting. They mentioned picture-in-picture and notification dots, both of which iOS already has. They mentioned Android Studio 3 and supporting Kotlin as a first-class language; again, I guess it’s their answer to Swift for iOS. There was the usual focus on battery usage, security (Google Play Protect) and making apps boot faster; they say they have seen 2x improvements in app start-up. Google has also improved copy and paste so that it automatically recognises addresses, company names, phone numbers etc., which in all honesty I thought it already did.
Throughout the presentation, whatever new stuff they demoed, they kept making a point that it’s also supported on iOS, not just Android (Google Assistant, Google Photos, Daydream etc.), which I personally thought was cool.
Lastly and probably the one that made me laugh the most!
YouTube for TV and consoles will now support 360 video, including live events; YouTube viewing on TV has gone up by 60%. However, the big news is Super Chat and trigger actions.
Super Chat allows you to pay for your comment (to a live YouTuber) to be noticed, so if you really want to ask that question, you can pay for it. Not too bad, I guess. But trigger actions allow you to pay to trigger something in the live video, like throwing a water bomb at the presenter or turning the lights off in their house. I can see this going downhill pretty fast.
Sorry for the late post about VEX Worlds. I thought I would have more time after Worlds to catch up with stuff; sadly (well, not really), the kids have been mega active. My eldest son played his first football tournament, we had a holiday, lots of family stuff!
So, VEX Worlds, what an amazing experience, I went along for the VEX EDR side of the competition (this year it was split EDR / IQ) as I was showing off the EDR Tank. Sadly I had to leave the US early as my son, Max was ill. Still a very cool experience!
So, the EDR Tank, well, it performed really, really well in remote-control mode. I mean, the thing was fairly slow but must have covered MILES! The batteries never died on me, nor did any motors! I did kill a few omniwheels; however, that’s to be expected. Even though I left early, the EDR Tank did not, and so others drove it around. I have not received it back yet to see how bad it is now, but I am sure it will be fine.
The autonomous side was a bit of a failure, to be honest, and looking back I had set myself up to fail; I will explain why. The autonomous side was using ROS (Robot Operating System), which is an industry standard. I was using a Neato lidar, which is awesome but only has a range of 5 meters, and SLAM (simultaneous localization and mapping) to work out where I was and where I needed to go by building up a map. SLAM works by detecting features of the surrounding area to work out where it is. When you’re in a hall that’s hundreds of meters wide with very few features, a sensor with a range of 5 meters is practically useless. In the end, I just showed kids how it worked on my laptop using RViz. If I had to do this properly I would need to invest in a proper lidar system with a much greater range. Another aspect which makes this very hard is all the people moving around: how can SLAM pick up features if they are constantly moving!
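To see how bad the range problem is, here is a rough back-of-the-envelope sketch (nothing to do with the actual ROS / Hector SLAM code; the hall dimensions are made up): cast lidar beams from the middle of a rectangular hall and count how many ever hit a wall within the sensor’s range. With almost no returns, the scan matcher has nothing to lock on to.

```python
# Rough illustration of why a 5 m lidar struggles in a big hall: count how many
# beams (fired from the hall centre) actually hit a wall within max_range.
import math

def beams_in_range(hall_width, hall_depth, max_range, n_beams=360):
    """Number of lidar beams that return a hit inside an axis-aligned hall."""
    hits = 0
    for i in range(n_beams):
        angle = 2 * math.pi * i / n_beams
        dx, dy = math.cos(angle), math.sin(angle)
        # Distance from the centre to the nearest wall along this beam.
        dist = min(
            (hall_width / 2) / abs(dx) if dx else float("inf"),
            (hall_depth / 2) / abs(dy) if dy else float("inf"),
        )
        if dist <= max_range:
            hits += 1
    return hits

print(beams_in_range(8, 8, 5.0))      # a small room: most beams come back
print(beams_in_range(100, 100, 5.0))  # a convention hall: nothing comes back
```

In a small room nearly every beam returns a distance, so the map builds nicely; in a 100 m hall every wall is beyond the sensor’s reach, which matches what happened on the day.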
Overall, the EDR tank was hugely popular, I gave tons of fist bumps, high fives, etc, people just thought it was cool, just a little slow.
Next year, if I did a vehicle again, I would have to make it a lot faster and forget about advanced sensors!
Here are some videos of VEX World and the EDR TANK:
iOSCon is a 2-day conference in London for people interested in iOS development and the Swift programming language. I got the chance to attend with a few people from Digital Products who work on our apps like MyO2, O2 Drive and Priority. I was rather looking forward to going, as I have had my feet out of the iOS development circle for a while. Working in the Lab requires you to jump around from technology to technology; each project could be written in a completely different programming language or tool, or involve no coding at all.
The conference covered things like:
What was interesting about this conference compared to others was the focus on behind the scenes stuff. Previous conferences I had attended featured a lot of talks on UI, user experience, building custom controls etc. This conference focused on making your code more stable, structured and easier to test.
You can find most of the talks here for free: https://skillsmatter.com/conferences/8180-ioscon-2017-the-conference-for-ios-and-swift-developers?tc=260f81#skillscasts
Here are 2 talks I really enjoyed:
It’s about time by Daniel Steinberg
This was a rather hard-hitting talk about striking the right balance between work and home life. He focuses on things like: either work or relax, don’t try to mix them; don’t go home and think of work. He tries to get you to focus on why you’re doing something, not what you’re doing. He also covered planning your day better, and how interruptions cost you: every time someone bothers you for a minute, it takes 23 minutes to recover, even if it’s you who caused the interruption.
If you’re interested in watching the talk, check the link below. Warning: it may make you rethink things a bit.
The second talk I really enjoyed was
Natural Swift: write Swift the way it was meant to be written by Paul Hudson
The talk focused on 3 topics which together can really help you improve your code. The first is POP (protocol-oriented programming). The second is functional programming, which focused on the map, flatMap, filter and reduce commands; these really impressed me because they can do in 1 line of code what I would usually do in 5. The last topic he covered was value types: Objective-C is very different from Swift, and you need to know what is a value type and what is a reference type.
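The talk itself was all Swift, but the map / filter / reduce idea isn’t Swift-specific; here is the “1 line instead of 5” point sketched in Python (the numbers are just example data).

```python
# The same map/filter/reduce idea the talk covered, illustrated in Python.
from functools import reduce

scores = [3, 8, 5, 10, 6]

# Imperative version: what the "5 lines" usually look like.
total = 0
for s in scores:
    if s >= 5:
        total += s * 2

# Functional version: filter, map (the * 2), and reduce in one expression.
total_fp = reduce(lambda acc, s: acc + s * 2, (s for s in scores if s >= 5), 0)
assert total == total_fp

# flatMap in Swift is roughly "map then flatten":
nested = [[1, 2], [3], [4, 5]]
flat = [x for xs in nested for x in xs]
```

The win isn’t just brevity: the functional version has no mutable counter to get wrong, which is a big part of why these commands impressed me in the talk.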
Sadly this talk was not filmed by Skillscast however you can download it for free from https://gumroad.com/l/natural-swift
So, let’s get the lie out of the way, this week’s update could cover more or less than a week! It is whatever I am thinking of at the time, that may or may not be happening. So apologies for that bombshell.
So, at work (O2’s Innovation Lab) I am currently learning data science stuff. For anyone who knows me, this is an extremely hard task, as I have the focus of a hamster on Red Bull. I am usually doing more than 1 thing (usually 5), and so it can be a struggle to learn a new skill, let alone one as difficult as data science. This week, I would say I am starting to get somewhere. I have been using different classifiers across my data, checking their scores and then looking at the confusion matrix. What that told me was that my data sucked badly; however, the upside was I could prove that my data was terrible.
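For anyone who hasn’t met a confusion matrix: in practice you’d get it from a library like scikit-learn, but the idea is small enough to sketch by hand (the labels below are made-up example data, not mine).

```python
# A minimal confusion matrix for a binary classifier: count where the
# predictions agree and disagree with the true labels.
def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

y_true = [1, 1, 0, 0, 1, 0]   # what actually happened
y_pred = [1, 0, 0, 1, 1, 0]   # what the classifier said
tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)
```

The useful bit is that a single accuracy score hides *how* you are wrong; the four counts show whether the model is missing positives (fn) or crying wolf (fp), which is exactly how you find out your data sucks.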
Another thing I am doing at work around data (oh, look at my focus) needed me to take some data and put a GUI over the top for people to be able to easily “ask the data questions”. I found a really cool free tool called Metabase which worked really nicely. All I needed to do was take an MS Access DB (oh boy, who uses MS Access?), convert it to CSV, and chuck it in a Postgres DB. It would have taken 5 mins on a PC; on a Mac it took a little bit longer!
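The CSV-to-Postgres step is simple enough to sketch. This isn’t the exact process I used (table and column names here are invented); it just shows one stdlib-only way to turn an exported CSV into SQL that Postgres can run. In practice `psql`’s `\copy` command is the easier route.

```python
# Sketch: read an exported CSV (header row first) and emit INSERT statements
# for Postgres. Table/column names are made-up examples.
import csv
import io

def csv_to_insert_sql(csv_text, table):
    """Turn CSV text into simple INSERT statements, escaping single quotes."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    cols = ", ".join(header)
    stmts = []
    for row in data:
        vals = ", ".join("'" + v.replace("'", "''") + "'" for v in row)
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return "\n".join(stmts)

sample = "name,city\nAlice,Leeds\nBob,O'Neill Town\n"
print(csv_to_insert_sql(sample, "people"))
```

(Everything ends up as text this way; for real use you’d also declare proper column types, or let Metabase/Postgres’ COPY do the heavy lifting.)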
So what’s new on the robot front this week? Well, VEX Worlds is in less than 25 days and the software is erm… still in development. The EDR Tank should be on its way to the US, so I made a mini version of it so that I can carry on with the development. I have written some safety features into the software so that I don’t mow down innocent kids; mouthy kids will, of course, get run over! The next thing I need to do is finish the bridge between the VEX Cortex and the ROS software.
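I haven’t shown the actual tank code, but the kind of safety feature I mean is simple: clamp how fast the tank is allowed to go, and stop it if the remote control goes quiet (a dead man’s switch). Here is a hypothetical sketch of that idea; the speed cap, timeout, and names are all made up for illustration.

```python
# Hypothetical safety layer for a remote-controlled robot: clamp commanded
# speed and stop if commands stop arriving. Constants are illustrative only.
import time

MAX_SPEED = 0.5        # cap on motor power, as a fraction of full speed
COMMAND_TIMEOUT = 0.5  # seconds of silence from the controller before stopping

class SafetyFilter:
    def __init__(self, now=time.monotonic):
        self.now = now              # injectable clock, handy for testing
        self.last_command = now()

    def filter(self, requested_speed):
        """Record that a command arrived and clamp it to the safe range."""
        self.last_command = self.now()
        return max(-MAX_SPEED, min(MAX_SPEED, requested_speed))

    def watchdog_speed(self):
        """Speed to force when no new input has arrived; None means carry on."""
        if self.now() - self.last_command > COMMAND_TIMEOUT:
            return 0.0  # the joystick has gone quiet: stop the tank
        return None
```

Every joystick command goes through `filter`, and a loop calls `watchdog_speed` regularly; if the radio drops out, the tank coasts to a stop instead of ploughing into the crowd.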
I have a new friend on Facebook (whoop whoop) who has been helping me with the ROS stuff; it’s useful to have a sounding board when learning new stuff, especially something as complex as ROS. I have a fear that the VEX Tank may not work too well with all the people moving about. SLAM and autonomous driving work (in a very simple form) by identifying features in the environment to try to locate the robot. When you have no real features (e.g. a long corridor) or lots of things changing (e.g. people moving about), it can get very confused. I am sure robotics engineers have a good solution to this, but being a beginner and using Hector SLAM for the first time, I am not holding my breath. My mini Raspberry Pi / LEGO version got confused if I farted near it, let alone 10,000 kids running around!
I started a statistics course, as it’s the precursor to the Udacity Machine Learning course.
I finished a sentiment analysis course; pretty interesting, it showed how to work out if a review of a film was positive or negative.
I watched Logan; it was very good and rather violent, and definitely not for the kids.
I watched Kong; it was pretty good but I preferred the previous one, which to be fair is nothing like the new one.
I started printing the InMoov project 🙂 THE BEST 3D printed project in the world!