Posts by Tags

AWS

CPU vs GPU for deep learning

less than 1 minute read

If anyone is wondering why you would need AWS for machine learning after reading this post, here’s a real example. I tried training the same model with the same data on the CPU of my MacBook Pro (2.5 GHz Intel Core i7) and on the GPU of an AWS instance (g2.2xlarge).

Running TensorFlow with Jupyter Notebook on AWS

6 minute read

Google’s open-source TensorFlow is one of the most promising machine learning frameworks out there. Even though Google is said to use a slightly different version internally, and the current release lags somewhat behind its competitors performance-wise, one can hardly deny that it has a lot of potential.

CNN

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

Camera

Metal Camera Tutorial Part 2: Converting sample buffer to a Metal texture

4 minute read

In the first part of the Metal Camera Tutorial series we managed to fire up a session that continuously sends us frames from the device’s camera via a delegate callback. Now, this is already pretty exciting, but we need to get hold of actual textures to do something useful with those frames — and we are going to use Metal for that.
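As a rough sketch of that conversion step (my own illustrative naming, not code from the post), the usual route goes through a CVMetalTextureCache, assuming BGRA pixel buffers:

```swift
import AVFoundation
import CoreVideo
import Metal

// Illustrative sketch: bridging a CMSampleBuffer to an MTLTexture via CVMetalTextureCache.
final class SampleBufferTextureConverter {
    private let device: MTLDevice
    private var textureCache: CVMetalTextureCache?

    init?(device: MTLDevice) {
        self.device = device
        // A texture cache bridges CoreVideo pixel buffers and Metal textures.
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache) == kCVReturnSuccess else {
            return nil
        }
    }

    func texture(from sampleBuffer: CMSampleBuffer) -> MTLTexture? {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)

        var cvTexture: CVMetalTexture?
        // Assumes a BGRA buffer; planar YCbCr formats need one texture per plane.
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, nil,
                                                  .bgra8Unorm, width, height, 0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }
}
```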

Metal Camera Tutorial Part 1: Getting raw camera data

6 minute read

A lot of apps nowadays use the iPhone and iPad cameras. Some even do pretty badass things with them (performance-wise), like running each frame through a neural network or applying a real-time filter. Either way, you may want to interact with the device hardware at as low a level as you can, be it getting data from the camera sensor or running computations on the GPU — you still want to minimise the impact on the device’s limited computational resources.
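To give a flavour of what that first step involves (an illustrative sketch with my own names, not the tutorial's exact code): an AVCaptureSession with an AVCaptureVideoDataOutput delivers every frame to a delegate callback on a background queue.

```swift
import AVFoundation

// Illustrative sketch of streaming raw camera frames to a delegate callback.
final class CameraStreamer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let frameQueue = DispatchQueue(label: "camera.frames") // hypothetical label

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true // drop frames instead of queueing them up
        output.setSampleBufferDelegate(self, queue: frameQueue)

        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()
    }

    // Called once per captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the CMSampleBuffer off for processing, e.g. conversion to a Metal texture.
    }
}
```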

Classification

Detecting road features

16 minute read

The goal of this project was to try to detect a set of road features in forward-facing vehicle camera data. This is a somewhat naive approach, as it mainly relies on computer vision techniques (no relation to naive Bayes!). The features we are going to detect and track are lane boundaries and surrounding vehicles.

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Computer vision

Visualizing lidar data

4 minute read

Arguably the most essential piece of hardware in a self-driving car setup is the lidar. A lidar lets you collect precise distances to nearby objects by continuously scanning the vehicle’s surroundings with a beam of laser light and measuring how long the reflected pulses take to travel back to the sensor.

Detecting road features

16 minute read

The goal of this project was to try to detect a set of road features in forward-facing vehicle camera data. This is a somewhat naive approach, as it mainly relies on computer vision techniques (no relation to naive Bayes!). The features we are going to detect and track are lane boundaries and surrounding vehicles.

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

Jupyter Notebook

Jupyter client for iPad

4 minute read

I have been a huge fan of Jupyter for a while now, and most of all of the flexibility it offers: I strongly believe that being able to access practically unlimited computational resources with nothing but a screen and a network connection has enormous potential.

Meet Fenton (my data crunching machine)

11 minute read

This is how I built and configured a dedicated data science machine that acts as a remote backend for Jupyter Notebook and PyCharm. It is backed by a powerful Nvidia GPU and is accessible from anywhere, so when it comes to machine learning tasks I am no longer constrained by the performance of my personal computer.

Jupyter Notebook Xcode theme

less than 1 minute read

So one Saturday I got particularly bored and thought I should configure my Jupyter Notebook a bit.

Running TensorFlow with Jupyter Notebook on AWS

6 minute read

Google’s open-source TensorFlow is one of the most promising machine learning frameworks out there. Even though Google is said to use a slightly different version internally, and the current release lags somewhat behind its competitors performance-wise, one can hardly deny that it has a lot of potential.

Keras

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

ML

Jupyter client for iPad

4 minute read

I have been a huge fan of Jupyter for a while now, and most of all of the flexibility it offers: I strongly believe that being able to access practically unlimited computational resources with nothing but a screen and a network connection has enormous potential.

Self-signed SSL certificate in Jupyter

less than 1 minute read

In order to use Jupyter Notebook on an iPad, you need to configure SSL certificates correctly. Since issuing a proper certificate from a trusted authority can be challenging in some cases, a self-signed certificate should suffice, provided it is signed by a CA that the device trusts. Follow these steps to get it working on your iPad!

Visualizing lidar data

4 minute read

Arguably the most essential piece of hardware in a self-driving car setup is the lidar. A lidar lets you collect precise distances to nearby objects by continuously scanning the vehicle’s surroundings with a beam of laser light and measuring how long the reflected pulses take to travel back to the sensor.

Detecting road features

16 minute read

The goal of this project was to try to detect a set of road features in forward-facing vehicle camera data. This is a somewhat naive approach, as it mainly relies on computer vision techniques (no relation to naive Bayes!). The features we are going to detect and track are lane boundaries and surrounding vehicles.

Meet Fenton (my data crunching machine)

11 minute read

This is how I built and configured a dedicated data science machine that acts as a remote backend for Jupyter Notebook and PyCharm. It is backed by a powerful Nvidia GPU and is accessible from anywhere, so when it comes to machine learning tasks I am no longer constrained by the performance of my personal computer.

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

Cloud logger

2 minute read

Most tasks in data science are long-running, and many folks (me included) execute them on remote machines. The crucial thing for those tasks is logging: you need to know how the training process is going and see the learning curves. It would also be convenient to access those logs from anywhere and be notified when the process has finished. So I built the cloudlog!

Jupyter Notebook Xcode theme

less than 1 minute read

So one Saturday I got particularly bored and thought I should configure my Jupyter Notebook a bit.

CPU vs GPU for deep learning

less than 1 minute read

If anyone is wondering why you would need AWS for machine learning after reading this post, here’s a real example. I tried training the same model with the same data on the CPU of my MacBook Pro (2.5 GHz Intel Core i7) and on the GPU of an AWS instance (g2.2xlarge).

Running TensorFlow with Jupyter Notebook on AWS

6 minute read

Google’s open-source TensorFlow is one of the most promising machine learning frameworks out there. Even though Google is said to use a slightly different version internally, and the current release lags somewhat behind its competitors performance-wise, one can hardly deny that it has a lot of potential.

Metal

Metal Camera Tutorial Bonus: Running Metal project in iOS Simulator

2 minute read

In the Metal Camera Tutorial series we created a simple app that renders camera frames on screen in real time. However, this app uses the Metal framework, which is not available in the iOS Simulator. Basically, your app won’t even build if you select a simulator as the build destination, which is a shame if, for example, you want to add unit tests and be able to run them without an actual device connected to your machine.
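One common workaround from that era, sketched below with hypothetical names (the post may settle on a different approach), is to fence Metal-specific code off with conditional compilation so a simulator build still compiles and falls back to a stub:

```swift
import Metal

final class FrameRenderer {
    #if arch(i386) || arch(x86_64)
    // Simulator (x86) build: Metal is unavailable here, so render nothing
    // (or fall back to a CoreGraphics-based preview).
    func render() {}
    #else
    private let device = MTLCreateSystemDefaultDevice()

    func render() {
        guard let device = device,
              let commandQueue = device.makeCommandQueue() else { return }
        // ... encode and commit the actual Metal work here ...
        _ = commandQueue
    }
    #endif
}
```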

Metal Camera Tutorial Part 2: Converting sample buffer to a Metal texture

4 minute read

In the first part of the Metal Camera Tutorial series we managed to fire up a session that continuously sends us frames from the device’s camera via a delegate callback. Now, this is already pretty exciting, but we need to get hold of actual textures to do something useful with those frames — and we are going to use Metal for that.

Objective-C

Transitions with CoreAnimation

5 minute read

I recently came across an interesting UX use case on medium.com: a concept for a mobile banking app. Not only does this concept look impressive in terms of usability compared with pretty much every existing mobile banking app, it also has a couple of neat and engaging UI design tricks that really catch your eye.

Python

Jupyter client for iPad

4 minute read

I have been a huge fan of Jupyter for a while now, and most of all of the flexibility it offers: I strongly believe that being able to access practically unlimited computational resources with nothing but a screen and a network connection has enormous potential.

Self-signed SSL certificate in Jupyter

less than 1 minute read

In order to use Jupyter Notebook on an iPad, you need to configure SSL certificates correctly. Since issuing a proper certificate from a trusted authority can be challenging in some cases, a self-signed certificate should suffice, provided it is signed by a CA that the device trusts. Follow these steps to get it working on your iPad!

Visualizing lidar data

4 minute read

Arguably the most essential piece of hardware in a self-driving car setup is the lidar. A lidar lets you collect precise distances to nearby objects by continuously scanning the vehicle’s surroundings with a beam of laser light and measuring how long the reflected pulses take to travel back to the sensor.

Detecting road features

16 minute read

The goal of this project was to try to detect a set of road features in forward-facing vehicle camera data. This is a somewhat naive approach, as it mainly relies on computer vision techniques (no relation to naive Bayes!). The features we are going to detect and track are lane boundaries and surrounding vehicles.

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

Cloud logger

2 minute read

Most tasks in data science are long-running, and many folks (me included) execute them on remote machines. The crucial thing for those tasks is logging: you need to know how the training process is going and see the learning curves. It would also be convenient to access those logs from anywhere and be notified when the process has finished. So I built the cloudlog!

Regression

End-to-end learning for self-driving cars

5 minute read

The goal of this project was to train an end-to-end deep learning model that would let a car drive itself around a track in a driving simulator. The approach I took was based on a paper by the Nvidia research team, with a significantly simplified architecture optimised for this specific project.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

Swift

Metal Camera Tutorial Bonus: Running Metal project in iOS Simulator

2 minute read

In the Metal Camera Tutorial series we created a simple app that renders camera frames on screen in real time. However, this app uses the Metal framework, which is not available in the iOS Simulator. Basically, your app won’t even build if you select a simulator as the build destination, which is a shame if, for example, you want to add unit tests and be able to run them without an actual device connected to your machine.

Swift: Type of a class conforming to protocol

1 minute read

Although protocols are not by any means a new thing, Swift specifically encourages developers to use them over inheritance. Not that Objective-C didn’t make use of protocols, but due to the dynamic nature of the Objective-C runtime one would often be tempted to put chunks of common declarations in a superclass instead.
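To illustrate the problem from the title with a toy example of my own (not the post's code): a variable whose type must be both a particular class and conform to a protocol can be expressed as a protocol composition in current Swift (earlier versions used a different syntax).

```swift
import UIKit

protocol Dimmable {
    func dim()
}

final class SettingsViewController: UIViewController, Dimmable {
    func dim() { view.alpha = 0.5 }
}

// A variable that must be both a UIViewController *and* conform to Dimmable.
var presented: (UIViewController & Dimmable)? = SettingsViewController()
presented?.dim()
```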

Metal Camera Tutorial Part 2: Converting sample buffer to a Metal texture

4 minute read

In the first part of the Metal Camera Tutorial series we managed to fire up a session that continuously sends us frames from the device’s camera via a delegate callback. Now, this is already pretty exciting, but we need to get hold of actual textures to do something useful with those frames — and we are going to use Metal for that.

Metal Camera Tutorial Part 1: Getting raw camera data

6 minute read

A lot of apps nowadays use the iPhone and iPad cameras. Some even do pretty badass things with them (performance-wise), like running each frame through a neural network or applying a real-time filter. Either way, you may want to interact with the device hardware at as low a level as you can, be it getting data from the camera sensor or running computations on the GPU — you still want to minimise the impact on the device’s limited computational resources.

Unit tests for Touch ID

2 minute read

Writing unit tests for iOS apps used to be challenging, mainly due to a lack of solid and stable testing capabilities out of the box in Xcode. However, with Apple’s XCTest framework things have improved greatly: you no longer have the excuse of needing third-party frameworks to test your code properly.
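As a taste of what that looks like in practice (an illustrative sketch with made-up names, not necessarily the post's approach): hiding LAContext behind a small protocol lets a test inject a stub instead of triggering a real Touch ID prompt.

```swift
import LocalAuthentication
import XCTest

// Illustrative: a protocol mirroring the one LAContext method we care about.
protocol AuthenticationContext {
    func canEvaluatePolicy(_ policy: LAPolicy, error: NSErrorPointer) -> Bool
}

extension LAContext: AuthenticationContext {}

// A stub the tests control, so no real biometric hardware is involved.
struct StubAuthenticationContext: AuthenticationContext {
    var biometricsAvailable: Bool
    func canEvaluatePolicy(_ policy: LAPolicy, error: NSErrorPointer) -> Bool {
        return biometricsAvailable
    }
}

final class TouchIDTests: XCTestCase {
    func testBiometricsReportedUnavailable() {
        let context: AuthenticationContext = StubAuthenticationContext(biometricsAvailable: false)
        XCTAssertFalse(context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: nil))
    }
}
```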

Motion Sensors in iOS

1 minute read

Apple mobile devices have so many capabilities nowadays that it is not always obvious where this or that functionality is coming from. Have you ever wondered how Google Cardboard VR apps work? The answer: they all use the device’s motion sensors, be it an Android or an iOS device.
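For a quick taste of the API involved (a sketch of my own, not the post's code): Core Motion's CMMotionManager streams fused attitude data, which is roughly what a Cardboard-style viewer tracks.

```swift
import CoreMotion

// Illustrative sketch: read fused device-motion (attitude) updates at 60 Hz.
let motionManager = CMMotionManager()

if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Roll, pitch and yaw describe the device's orientation in space.
        print(attitude.roll, attitude.pitch, attitude.yaw)
    }
}
```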

TensorFlow

Traffic signs classification with a convolutional network

14 minute read

This is my attempt to tackle the traffic sign classification problem with a convolutional neural network implemented in TensorFlow (reaching 99.33% accuracy). The highlights of this solution are data preprocessing, data augmentation, pre-training and skip connections in the network.

Detecting facial keypoints with TensorFlow

15 minute read

This is a TensorFlow follow-along for an amazing deep learning tutorial by Daniel Nouri. Daniel describes ways of approaching the computer vision problem of detecting facial keypoints in an image using various deep learning techniques that gradually build upon each other, demonstrating the advantages and limitations of each.

CPU vs GPU for deep learning

less than 1 minute read

If anyone is wondering why you would need AWS for machine learning after reading this post, here’s a real example. I tried training the same model with the same data on the CPU of my MacBook Pro (2.5 GHz Intel Core i7) and on the GPU of an AWS instance (g2.2xlarge).

Running TensorFlow with Jupyter Notebook on AWS

6 minute read

Google’s open-source TensorFlow is one of the most promising machine learning frameworks out there. Even though Google is said to use a slightly different version internally, and the current release lags somewhat behind its competitors performance-wise, one can hardly deny that it has a lot of potential.

Tutorial

Meet Fenton (my data crunching machine)

11 minute read

This is how I built and configured a dedicated data science machine that acts as a remote backend for Jupyter Notebook and PyCharm. It is backed by a powerful Nvidia GPU and is accessible from anywhere, so when it comes to machine learning tasks I am no longer constrained by the performance of my personal computer.

Metal Camera Tutorial Bonus: Running Metal project in iOS Simulator

2 minute read

In the Metal Camera Tutorial series we created a simple app that renders camera frames on screen in real time. However, this app uses the Metal framework, which is not available in the iOS Simulator. Basically, your app won’t even build if you select a simulator as the build destination, which is a shame if, for example, you want to add unit tests and be able to run them without an actual device connected to your machine.

Metal Camera Tutorial Part 2: Converting sample buffer to a Metal texture

4 minute read

In the first part of the Metal Camera Tutorial series we managed to fire up a session that continuously sends us frames from the device’s camera via a delegate callback. Now, this is already pretty exciting, but we need to get hold of actual textures to do something useful with those frames — and we are going to use Metal for that.

Running TensorFlow with Jupyter Notebook on AWS

6 minute read

Google’s open-source TensorFlow is one of the most promising machine learning frameworks out there. Even though Google is said to use a slightly different version internally, and the current release lags somewhat behind its competitors performance-wise, one can hardly deny that it has a lot of potential.

Metal Camera Tutorial Part 1: Getting raw camera data

6 minute read

A lot of apps nowadays use the iPhone and iPad cameras. Some even do pretty badass things with them (performance-wise), like running each frame through a neural network or applying a real-time filter. Either way, you may want to interact with the device hardware at as low a level as you can, be it getting data from the camera sensor or running computations on the GPU — you still want to minimise the impact on the device’s limited computational resources.

UI/UX

Mobile app navigation: designing a questionnaire

5 minute read

There are quite a few potential scenarios where you may want your user to go through a set of questions, take a test or simply provide feedback. I hope this post will give you a useful example of interacting with the user on a mobile device, and will inspire you to design something straightforward and clear next time you face a similar challenge.

Transitions with CoreAnimation

5 minute read

I recently came across an interesting UX use case on medium.com: a concept for a mobile banking app. Not only does this concept look impressive in terms of usability compared with pretty much every existing mobile banking app, it also has a couple of neat and engaging UI design tricks that really catch your eye.

Unit Tests

Metal Camera Tutorial Bonus: Running Metal project in iOS Simulator

2 minute read

In the Metal Camera Tutorial series we created a simple app that renders camera frames on screen in real time. However, this app uses the Metal framework, which is not available in the iOS Simulator. Basically, your app won’t even build if you select a simulator as the build destination, which is a shame if, for example, you want to add unit tests and be able to run them without an actual device connected to your machine.

Unit tests for Touch ID

2 minute read

Writing unit tests for iOS apps used to be challenging, mainly due to a lack of solid and stable testing capabilities out of the box in Xcode. However, with Apple’s XCTest framework things have improved greatly: you no longer have the excuse of needing third-party frameworks to test your code properly.

iOS

File system permissions and paths in iOS

less than 1 minute read

Although Juno makes coding on the iPad a breeze, there are still some tricks you need to know — one of them is working with the file system and handling paths. For example, when your code is supposed to read a file’s contents or write data to a file, how do you specify that file’s location in iOS?

Jupyter client for iPad

4 minute read

I have been a huge fan of Jupyter for a while now, and most of all of the flexibility it offers: I strongly believe that being able to access practically unlimited computational resources with nothing but a screen and a network connection has enormous potential.

Metal Camera Tutorial Bonus: Running Metal project in iOS Simulator

2 minute read

In the Metal Camera Tutorial series we created a simple app that renders camera frames on screen in real time. However, this app uses the Metal framework, which is not available in the iOS Simulator. Basically, your app won’t even build if you select a simulator as the build destination, which is a shame if, for example, you want to add unit tests and be able to run them without an actual device connected to your machine.

Swift: Type of a class conforming to protocol

1 minute read

Although protocols are not by any means a new thing, Swift specifically encourages developers to use them over inheritance. Not that Objective-C didn’t make use of protocols, but due to the dynamic nature of the Objective-C runtime one would often be tempted to put chunks of common declarations in a superclass instead.

Metal Camera Tutorial Part 2: Converting sample buffer to a Metal texture

4 minute read

In the first part of the Metal Camera Tutorial series we managed to fire up a session that continuously sends us frames from the device’s camera via a delegate callback. Now, this is already pretty exciting, but we need to get hold of actual textures to do something useful with those frames — and we are going to use Metal for that.

Metal Camera Tutorial Part 1: Getting raw camera data

6 minute read

A lot of apps nowadays use the iPhone and iPad cameras. Some even do pretty badass things with them (performance-wise), like running each frame through a neural network or applying a real-time filter. Either way, you may want to interact with the device hardware at as low a level as you can, be it getting data from the camera sensor or running computations on the GPU — you still want to minimise the impact on the device’s limited computational resources.

Unit tests for Touch ID

2 minute read

Writing unit tests for iOS apps used to be challenging, mainly due to a lack of solid and stable testing capabilities out of the box in Xcode. However, with Apple’s XCTest framework things have improved greatly: you no longer have the excuse of needing third-party frameworks to test your code properly.

Motion Sensors in iOS

1 minute read

Apple mobile devices have so many capabilities nowadays that it is not always obvious where this or that functionality is coming from. Have you ever wondered how Google Cardboard VR apps work? The answer: they all use the device’s motion sensors, be it an Android or an iOS device.

Mobile app navigation: designing a questionnaire

5 minute read

There are quite a few potential scenarios where you may want your user to go through a set of questions, take a test or simply provide feedback. I hope this post will give you a useful example of interacting with the user on a mobile device, and will inspire you to design something straightforward and clear next time you face a similar challenge.

Transitions with CoreAnimation

5 minute read

I recently came across an interesting UX use case on medium.com: a concept for a mobile banking app. Not only does this concept look impressive in terms of usability compared with pretty much every existing mobile banking app, it also has a couple of neat and engaging UI design tricks that really catch your eye.