Unit Testing and why it should be used
What is Unit Testing?
Unit testing is where the program is broken down into a series of units (functions, methods, or small areas of code), each of which is tested individually and in detail. This allows us to check that each function works as it should: if we give one of our methods a set of inputs, we can verify that we get the expected output for each example. We need to check our functions not only with predictable values, but also with borderline values and, just as importantly, totally unexpected values (e.g. wrong datatype, wrong size, empty, irrelevant...)
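For example, here is a minimal sketch using Node's built-in assert module; the divide function is hypothetical, just for illustration:
var assert = require('assert');

// hypothetical unit under test
function divide(a, b) {
  if (b === 0) { throw new Error('Division by zero'); }
  return a / b;
}

assert.equal(divide(10, 2), 5);               // predictable value
assert.equal(divide(-9, 3), -3);              // borderline: negative input
assert.throws(function () { divide(1, 0); }); // totally unexpected input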
Only the characteristics of that unit need to be tested, as everything else in the application will be covered by the other unit tests.
It is usually an automated process: once the tests are written, they run automatically whenever they are configured to run.
Benefits of Unit Testing
- Identify failures in our code BEFORE it gets integrated with the larger application
- Allows you to keep verifying that your method still works as expected in all (tested) cases while you refactor or change the logic in the method's body
- If you're approaching development from a unit test perspective, you'll likely be writing code that is easier to test: more modular, clear, standalone methods - this is better code
- Prevent future changes from breaking functionality.
- They help you really understand the design of your code
- They give you instant feedback, and that green tick when they all pass is so satisfying!
- Faster to develop more robust code
- They can help with code reuse
- Forces better code documentation
Disadvantages of Unit Testing
- When you're just getting started, they can be time-consuming while you get used to them. They will save time in the long run, but it doesn't always feel like that.
- Learning curve. Although the principle of unit testing is very simple, when you actually sit down to write unit tests for the first time it is often hard to know what and how you should be testing in each module. A solution to this is to look at example tests on the internet; for example, all decent Node.js projects and modules on GitHub will have a test directory that you can read or run.
- Trying to retrofit unit tests to legacy code, or code not written with testing in mind, is sometimes close to impossible. The solution is to not write bad code in the first place, and to write tests first or during development.
As you can see the advantages massively outweigh the very weak disadvantages.
Unit testing confirms the code you're writing is awesome!
A good unit test is:
- Able to be fully automated
- Has full control over all the pieces running (use mocks or stubs to achieve this isolation when needed - see the sketch after this list)
- Can be run in any order if part of many other tests
- Runs in memory (no DB or File access, for example)
- Consistently returns the same result (You always run the same test, so no random numbers, for example. save those for integration or range tests)
- Runs fast
- Tests a single logical concept in the system
- Readable
- Maintainable
- Trustworthy (when you see its result, you don’t need to debug the code just to be sure)
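As a sketch of that isolation point: below, a hypothetical getGreeting function depends on a clock, and the unit test hands it a stub instead of a real Date, so the test is deterministic and runs entirely in memory:
var assert = require('assert');

// unit under test - depends on anything with a getHours() method
function getGreeting(clock) {
  return clock.getHours() < 12 ? 'Good morning' : 'Good afternoon';
}

// the stub replaces the real clock, so the result never varies with the time of day
var stubClock = { getHours: function () { return 9; } };
assert.equal(getGreeting(stubClock), 'Good morning');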
Links for Further Reading
Unit testing in AngularJS
Unit testing in PHP
http://code.tutsplus.com/articles/the-beginners-guide-to-unit-testing-what-is-unit-testing--wp-25728
Unit testing in Android
Unit testing in NodeJs
Unit testing in Swift
Coverage testing in Node.js with Istanbul
What is Coverage Testing?
Coverage testing determines what proportion of your source code is covered by your tests.
It's useful to be able to check this as you're developing and writing tests, so that you can aim as close as possible to 100% coverage.
How do I use Coverage Testing in my Node.js / JavaScript application?
It is really easy to run coverage tests in your project, and even easier if you are using a testing framework and have everything set up.
There is a Node module called Istanbul, which takes care of everything for you.
Read more about Istanbul here: https://github.com/gotwarlost/istanbul
1. Install Istanbul, locally as a dev dependency, or globally
npm install istanbul --save-dev
npm install istanbul --global
2. Run a quick coverage test on one of your test files like so
istanbul cover test/my-test.js
3. Add a script to your package.json to easily run a coverage command on ALL tests at once
"scripts": { "start": "node app.js", "test": "mocha", "cover": "istanbul cover node_modules/mocha/bin/_mocha --dir ./reports/coverage" }
(this works great for a Mocha setup on Windows; it may be slightly different for other testing frameworks)
4. Run your coverage tests!
npm run cover
You should see a nice summary of your coverage in the console.
Also check out the more detailed HTML report that Istanbul kindly created for you.
Don't forget to add the reports directory to your .gitignore !
Check out the example project on GitHub:
Setting up a unit testing environment in Node.js
Example Project
Introduction
In this post we'll go through the complete process of setting up your test environment in a Node.js app, and then write a few simple unit tests. Although I've aimed this at Node apps, you should be able to follow these steps for any JavaScript setup, such as Ionic.
Before reading this article you should already know:
- What unit testing is, follow the link below for more about general unit testing
- What TDD is, follow the link below for more about TDD
1. Setting up a Testing Framework
A testing framework will take care of the overall test structure for us, make it easy to run our tests, and allow us to use additional plugins if necessary (such as generating HTML test reports, or providing coverage testing features). It is possible to do all this with vanilla JavaScript, but it's a lot more work.
The two most popular test frameworks are Mocha and Jasmine. For this we will be using Mocha.
Since we will use the 'mocha' command, we must first install Mocha globally. If you don't, you will get the error message 'mocha is not recognized as an internal or external command' (or the Mac equivalent).
npm install mocha --global
If you're using git, or have multiple developers working on the project, you'll also want to add Mocha to your package.json so others can just run npm install to populate their node_modules.
First initialize your package.json (if you haven't already done so):
npm init
Then add mocha to your devDependencies and save it in node_modules by running
npm install mocha --save-dev
Next we need to create a directory to store our tests. By default Mocha will look for a folder called 'test' inside the project's root, so we must use this name exactly.
mkdir test
Summary
The following commands can be used to set up Mocha in a new project. (Mac commands might be slightly different.)
mkdir test-example && cd test-example  :: create new project directory
npm i mocha -g                         :: install mocha globally so the mocha command can be used
npm init                               :: initialise package.json
npm i mocha -D                         :: add mocha to your project's devDependencies
mkdir test                             :: create new folder for tests
2. Setting up an Assertion Library
Node does have some assertion functionality built in, although it is quite basic and not particularly nice to use. So instead we are going to use Chai as an assertion library - there are several others out there, but Chai is well established and has good documentation.
So first off we need to install chai to our project like we do with all node modules:
npm install chai --save-dev
Next we are going to create a file to put our tests in. Remember this should be inside the test directory. The file can be called whatever you like (but keep it relevant), and the extension should be .js (or .coffee if you're writing your tests in CoffeeScript).
Inside your new test file, the first step, as with any other Node module, is to require Chai:
var chai = require('chai');
If you visit the Chai website (http://chaijs.com/) you will see that Chai has three interfaces: Should, Expect and Assert.
It is up to you which one you use, and it's easily possible to use a combination. They all work in a similar way; the only main difference is the syntax and structure of the test blocks you write. Should reads most like English, Assert is most like conventional JUnit comparisons, and Expect is somewhere in between.
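To illustrate the difference, here is the same assertion written in each of the three styles (a sketch; note that the Should style needs chai.should() to be called once first):
var chai = require('chai');
chai.should();               // enables the Should style
var expect = chai.expect;
var assert = chai.assert;

var foo = 'bar';
foo.should.equal('bar');     // Should: reads like English
expect(foo).to.equal('bar'); // Expect: somewhere in between
assert.equal(foo, 'bar');    // Assert: like conventional JUnit comparisons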
So if we are using Expect, the first thing we need to do is define the expect method:
var expect = chai.expect;
Your tests will consist of a describe block stating what code module is being tested, containing a series of it blocks that hold the Chai assertions, in this format:
describe('JavaScript example test', function () {
  it('should return true in JavaScript', function () {
    expect(true).equal(true);
  });
});
The format of an expect method is like this:
expect(foo).to.be.a('string');
expect(foo).to.equal('bar');
expect(foo).to.have.length(3);
expect(tea).to.have.property('flavors').with.length(3);
Example from the Chai documentation, view more here: http://chaijs.com/guide/styles/#expect
3. Running Tests, and specifying additional options
Once you have a sample test like the one above, you can run it to see the result.
To run all tests in the command line, use the following command:
mocha
Running tests from a single file
If you'd like to run just tests from a single file, you can type mocha followed by the path to the test file.
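For example, assuming your test file is test/my-test.js:
mocha test/my-test.js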
Specifying test path in package.json
It's good practice to specify a command to execute your tests in your package.json.
This will allow other developers to run 'npm test' on any project whatever testing framework or setup is implemented.
In the scripts section, where you've specified the entry point, add a test command:
"scripts": { "start": "node ./bin/www", "test": "mocha", }
This will mean you can run npm test and npm will run the mocha command.
Passing additional parameters to mocha
With mocha, it's possible to pass it additional parameters to specify options such as how tests are run.
You can change things like how results are displayed, what language you write your tests in (e.g. coffee), whether it should look in sub-directories or not, the timeout, etc.
This can be done with the ordinary flag syntax, e.g.
mocha --reporter nyan --recursive
You can see a full list of flags in the Mocha documentation: https://mochajs.org/
The above command will set the reporter (how results are displayed) to be Nyan cat (check it out, it's pretty cool), and the recursive flag will mean that mocha will also execute tests in sub-directories.
But, it's a bit of a pain having to specify all those flags instead of just calling the mocha command. So you can instead create a file called 'mocha.opts' inside your test directory. In this file you can specify a list of flags. Then you can just run the mocha command, without typing any parameters in the command line.
Put each flag on a new line, and use long flags rather than short-hand so that it's clear for other developers. Here is an example mocha.opts file:
https://github.com/mochajs/mocha/blob/master/test/mocha.opts
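As an illustration, a minimal mocha.opts might look like this (the flag values are just examples):
--reporter spec
--recursive
--timeout 5000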
4. Coverage Testing
See the coverage testing article here:
Introduction to Test Driven Development
First off, what do tests provide us with?
- Documentation of the code
- Catch future errors
- Long-term time savings, because errors are found before anything's been deployed to production
Although all the above are true, using tests like this is just a tool - not a process.
What is TDD?
In its simplest form, TDD comes down to the following process:
- Decide what the code will do
- Write a test that will pass if the code does that thing
- Run the test, to prove it will fail
- Write the code
- Run the test again, to see it pass
It's important to note that you must not write the code until you've written the test.
It is also essential to ensure the test actually fails first; it is surprisingly easy to make a small mistake in your test case that means it will always pass, and that's not the type of error anyone is likely to look into. It's also sometimes necessary to back-test: break the code to show that the test fails.
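As a sketch of one red-green cycle with Mocha and Chai (isEven and the file names are hypothetical):
// test/is-even-test.js - written BEFORE the implementation exists,
// so the first run fails (the require itself goes red)
var expect = require('chai').expect;
var isEven = require('../is-even');

describe('isEven', function () {
  it('should return true for even numbers', function () {
    expect(isEven(4)).to.equal(true);
  });
  it('should return false for odd numbers', function () {
    expect(isEven(7)).to.equal(false);
  });
});

// is-even.js - written AFTER watching the tests fail
module.exports = function isEven(n) {
  return n % 2 === 0;
};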
This should be done for every couple of lines of code, every method.
What does TDD provide?
- Design and plan before you code
- Documenting your design before you build it
- Proving that the code implements that design
- Encouraging the design of testable code - very important!!
Testable code, is good code!
This is because if you have long methods with loads of nested if statements, it's just not possible to write tests for them. If you write the tests first, you can't write code that is too complicated.
Testable code is:
- Modular, as we're forced to break things down so we can test them
- Decoupled in design; if our objects or methods are too tightly interwoven, we can't test them independently
- Made of methods with limited scope that don't try to do too much in one place
- etc...
Basically, good testable code will have a much lower cyclomatic complexity. This is the measure of how many different paths there are through the code; essentially every conditional statement you add gives you another route, and another set of tests.
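For instance, in this hypothetical function each conditional doubles the number of paths, so full branch coverage already needs four test cases:
function shippingCost(weight, express) {
  if (weight > 10) {          // first conditional: two routes
    return express ? 20 : 10; // second conditional: two more routes
  }
  return express ? 10 : 5;
}
// tests needed: (11, true), (11, false), (5, true), (5, false)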
If you're finding the test complicated to write, that's a code smell: you're going about it the wrong way.
Result of TDD
Better code in less time *
*It might not feel like it's going faster, because it's a process rather than just hacking, and processes feel tedious. It may also take some practice to get up to speed, but it's fully worth it in the long run for speed and code quality.
Use your judgement about when to test
Although nearly all code should be tested thoroughly there are some exceptions:
- Some things are too hard to test - especially where external services are involved
- Some tests are too trivial to be useful
- Over-testing is possible
- Exploratory coding, when you're not sure how it's going to be used - so not for production code
Links for learning more about TDD
- A really clear explanation of what TDD is and how to implement it. Quite long, but thorough and easy to understand. I'd recommend starting with this if you know nothing about TDD.
- http://code.tutsplus.com/tutorials/the-newbies-guide-to-test-driven-development--net-13835 (ignore the bits specific to PHP, the rest is quite generic)
- A good article if you're using TDD with JUnit in Java.
- Very short introduction to TDD with Node.js, using Mocha
- Not free unless you can use the trial, but this series is very complete from Lynda, and worth a look:
- Quite a long video explaining why TDD is so important to use
- If you want to learn the difference between TDD and BDD, then this video clearly explains it in 5 minutes
- When your PM finds out you didn't follow TDD:
How to write a gulpfile
Setting up a new project and getting it ready for Gulp
Gulp is simple to set up. Presuming you have Node.js already installed:
- In the command line, navigate into the root of your project working directory
- Install Gulp with npm install gulp --save-dev. This will add gulp into your node_modules folder, and the --save-dev part will add gulp to the devDependencies in your package.json file. It is similar to --save/-S, only it's a dependency that is only required for development of your app.
- Create a new JavaScript file in your project root directory called gulpfile.js
- This is the file where we will put all our build configuration
How to write the gulpfile.js
Install Plugins
Firstly you'll need to install and include the plugins you need. Every task in Gulp uses a plugin. For this example we'll be compiling sass.
If you haven't already done so, in your console run:
npm install gulp --save-dev
npm install gulp-sass --save-dev
This will install both gulp and our first plugin, gulp-sass. It will also add the dev dependencies to our package.json.
Require Necessary Modules
Back in the gulpfile.js, in the same way that you'd use any other Node module, we need to require it. So at the top of your gulpfile.js paste the following code:
var gulp = require('gulp');
var sass = require('gulp-sass');
Creating a gulp task
Now we need to actually write the code to tell gulp what to do with this plugin.
To do this we call gulp's task method. This method takes two parameters: firstly a string, which can be whatever you want to call your task (in this example I called it sass, which seems to make sense).
Secondly we pass it a function that does the work. The format of this function is:
- First we pass a glob of which files and folders to look for; this is done with the gulp.src method (in this example it's all files within the scss folder with the file extension .scss).
- Then we call gulp's pipe method on that file selection, where we pass it the plugin as a parameter.
- Then finally we give gulp a destination location where the processed files should be saved. We do this using the gulp.dest method.
gulp.task('sass', function () {
  return gulp.src('scss/*.scss')
    .pipe(sass())
    .pipe(gulp.dest('css'));
});
Running the Gulp task
Try testing out what we wrote above by running the following command in the console
gulp sass
This will look for the gulpfile.js in current directory, then look for the task called 'sass' and run it.
What you should see is that all your Sass code inside the scss directory is compiled to CSS and saved in your css directory.
Watching files for changes
Now what would be really good is if gulp could just wait until every time we make a change to our Sass, and then compile it into CSS automatically. This is actually really easy to set up using gulp watch.
gulp.task('watch', function () {
  gulp.watch('scss/*.scss', ['sass']);
});
We have named this task 'watch', and what it is doing is watching for changes in .scss files inside the scss folder and then running the 'sass' task.
If you also had a coffeescript task, you could just add another line inside this method looking something like this:
gulp.watch('cscripts/*.coffee', ['coffee']);
Including Multiple Plugins in a single task
It's straightforward to run multiple operations at once - for example, process all your scripts in one task, or all your images in another. e.g.
// assumes the gulp-concat, gulp-rename and gulp-uglify plugins have been required
gulp.task('scripts', function () {
  return gulp.src('js/*.js')
    .pipe(concat('all.js'))
    .pipe(gulp.dest('dist'))
    .pipe(rename('all.min.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dist'));
});
Default Task
If we name a gulp task 'default' it becomes the default task and you can run it by simply running
gulp
(instead of gulp task-name)
We can set our default task to run several tasks for us. For example:
gulp.task('default', ['sass', 'lint', 'coffee', 'watch']);
(presuming you already have 'sass', 'lint', 'coffee' and 'watch' tasks) it will run all the listed tasks.
Prerequisite Tasks
In a similar way, you can set prerequisite tasks to run by listing them after the task name:
gulp.task('coffee', ['coffee-lint'], function () {
  return gulp.src('cs/*.coffee')
    .pipe(coffee())
    .pipe(gulp.dest('dist'));
});
Introduction to automating your tasks with the gulp.js build tool
What is Gulp?
Gulp.js is a streaming build system built on Node.js. This basically means that it can be configured to perform repetitive tasks and coding operations automatically during development. For example it can compile all your CoffeeScript whenever a file changes, or it can minify your CSS, or synchronize all your development browsers and refresh them on file change.
Gulp uses a variety of plugins to do these tasks, and there is a plugin to do pretty much everything you'd need to do very easily. If you can't find one to do a particular operation, you can make your own ;)
Why do I need to use a build system?
For all modern web applications (hybrid apps, sites, web backends...) there are certain tasks that are almost essential to ensure high quality: for example checking your JavaScript for errors, minifying it, and concatenating it. There are also tasks that just make developing easier, like having your app tested in every browser and screen size whenever a file changes, or monitoring file sizes and network requests.
It is true that it is possible to do most of these tasks without a build system or tool in place, but using something like Gulp or Grunt is much more efficient, easy to use, and fast, and it keeps development code to a minimum and all in one place.
Example Gulpfile for a typical Node.js Express app
I've created a Gulpfile for a typical Node Express project that uses CoffeeScript and Less. It is just intended as a working example of how you can integrate everything together, so you can modify it to meet your specific project needs.
Setting up example project
- Open the console and navigate into a new working directory
- Run the command: git clone https://github.com/Lissy93/gulp-example.git
- Install the dependencies by running: npm install
- Start the gulp script by running: gulp
So what the above steps should have done is: download the example project from GitHub, install all its dependencies found in the package.json and put them in the node_modules folder. Running gulp will then call the default task inside the gulpfile.js.
What this project does
If you look in the gulpfile.js you'll see there's a whole load of tasks being covered, mainly around processing the CSS and JavaScript ready for production. You can view the full list of tasks in the readme.md of the Git repo.
Testing it out
So once you've run the above commands in the terminal, if everything worked as it should, your web browser should have opened. If it didn't, try visiting http://localhost:4000. (If there is nothing, check the console for errors.)
Browser Sync
If you open another browser and view the same URL, you'll notice that the two browsers are in sync. So if you scroll down on one, the other will scroll; if you click a link on one, all browsers will follow. This is really useful for testing your app on a range of browsers and screen sizes all at once without having to do any clicking; it works even better if you have a decent number of monitors ;)
It's done using a gulp plugin called browser-sync.
Nodemon
Secondly, you'll notice that if you make any changes to any of the Jade templates or views, it will update live across all your browsers as you code. No refreshing needed :) (You do need to set your IDE to autosave on keyup, which should be the default if you're using any half-decent IDE.) This is done using nodemon in gulp.
Linting, Compiling, Concatenating, Piping... styles and scripts
Now for the coolest part: in your working directory, open up the sources folder. If you edit any of the CSS, Less, JavaScript or CoffeeScript files, you'll see that on save it creates a new version of the production code in your public directory, and refreshes the browsers accordingly. The code in the public directory is minified and has had everything else done to it to make it really efficient. Check the console for a list of all the tasks gulp has just done.
Exercises
- Try creating and modifying the JavaScript and CoffeeScript files in the javascript source directory, then look in the public directory and see what they look like in production form.
- In a similar way modify the CSS and Less file, you should see the changes in the browser
- Have a read through gulpfile.js and modify the configuration to suit your project, then test it out.
- Install a new gulp-plugin and set it up by seeing how the rest have been done
- Try running some of the tasks individually; for example gulp clean should just clean the public directory, and gulp watch should just watch for changes and update files accordingly.
If the console freezes, cancel the process (Ctrl+C) and rerun gulp.
Smart Depart App - AngelHack 2015 - HP Prize Winner
Smart Depart is an app which monitors your predefined route into work, and will wake you up earlier if there are delays, ensuring you're never late for meetings.
Currently it is integrated with TFL, but we plan to start integrating it with National Rail, traffic and weather data too.
The client is an iOS app written by Ollie, and links in with the backend which is written in Node.js
Smart Depart is also integrated with HP OnDemand IDOL, where we used the Sentiment Analysis API.
We used this tool to analyse multiple bodies of Tweets, fetched from within a given time frame and Tweeting about a particular London Underground line. This enables us to determine whether the general attitude towards each Underground line is positive or negative, and apply an extra layer of contextual information to our application.
Hackathon.io page: http://www.hackathon.io/projects/7417
Trello story board: https://trello.com/b/DIE2RuFO/smart-depart
All the code written was open source:
We created several open source node modules that we also published over the weekend too
TFL Journey Planner: https://github.com/Lissy93/node-tfl-journey-planner
Live TFL line sentiments: https://github.com/Lissy93/london-underground-live-sentiment-analysis
View web portal: http://smart-depart.herokuapp.com/
Sentiment Analysis: http://smart-depart.herokuapp.com/sentiment-analysis
More about AngelHack London
Ollie and I, a few hours in
Polymer and Modern Web APIs
Polymer is part of the web platform team at Google, and it officially began three years ago - but last week Google announced that 1.0 has been released and is ready for production. Previously, building web apps across multiple platforms and form factors was really challenging, as different components are not always designed to work together - the answer to this is web components.
Web components allow custom components to be used everywhere, and they are interoperable, meaning they add another layer of functionality above the platform but below other frameworks. Web components standardise everything.
Polymer is the library for building web components, it makes it fast and easy to build web components that can be used everywhere. Polymer is not a framework - because web components are not a framework, web components with polymer are not replacing anything else, they can work with everything else.
Polymer 1.0
Polymer 1.0 is brand new; every line of code has been re-written in the past year, so that it is considerably faster, less complex and generally better than the previous 0.x versions. It is 3 times faster on Chrome (than previous versions), 4 times faster on Mobile Safari, and 30% less code overall. The whole thing is only 19 kb (42 kb including all the polyfills). 1.0 also has a lot of new features. Firstly, shady DOM, which replaces the shadow DOM polyfill - it is a simpler implementation.
Another core new feature in 1.0 is theming and styling with CSS custom properties. Web components introduce scoping and custom CSS selectors.
Polymer Elements
Initially there were two main branches of components in Polymer: the iron elements and the paper elements. Google have introduced three new branches.
Firstly, the Google web components. So if you need to add Google Maps, for example, use the Google Map tag. There are elements for all of Google's core web services - it's a new Google SDK for the web, delivered through these elements.
A second branch of elements introduced are the platinum elements; these bring together powerful features such as service workers. So to drop push notifications onto your page, or offline caching, or anything like that, just put the appropriate element into your page.
Thirdly, the gold elements; these cover mobile and web e-commerce and high-quality checkout processes, such as verifying credit card details.
Google have also created a catalogue of polymer elements https://elements.polymer-project.org/
@suggest_movies Friction free social movie recommendations
Send a tweet to @suggest_movies and receive a personalized movie recommendation back.
@suggest_movies will analyse the public tweets from your Twitter profile and use personality insights powered by IBM Watson to create a profile of your character and determine what movie genres you'd be interested in. It will then select a movie that it thinks you'd like and tweet it back to you, along with a link to where you can watch it.
There is also a cinema mode: if you use a word like showing, cinema or nearby, it will use its recommendation engine to choose a film that is showing in a cinema near you, and give you details and a link to book your ticket.
All the code is open source and available on GitHub here
Oh yeah and we came second, winning way too many movies and a bit of money which was nice :)
Presenting
The live demo
Chiltern 100
The Chiltern 100 was a 122 km / 177 km sportive in the Chiltern Hills.
http://humanrace.co.uk/events/cycling/chiltern-100-sportive
Google IO - What's new in Android
At Google IO 2015 some exciting new announcements were made about new features in the Android operating system for developers.
Developers can now build apps for Android M by downloading the developer preview (API 22) and latest SDK. There will be a couple of versions of previews which will be improved before the final version is released.
Permissions
In Android M, run-time permissions have been introduced. This means the user won't have to accept a wall of permissions when they first install an application; instead they'll be asked to grant the app permission to access a specific feature only when it's needed. Of course it'll remember the user's choice, so they'll only have to do it once per permission, per app.
The user can also go into settings to view and adjust app permissions for all installed applications, sorted both by app and by a particular permission, which is handy. Basically, everything users have ever wanted from permissions, delivered. This will only be available for apps developed for Android M, though; legacy apps will remain the same, asking for all permissions up front on install, but they can still be adjusted from the phone settings. This means that your app must be tested on M to check all permissions work.
Voice Interaction
VoiceInteractor allows you to interact with the voice input system. There was already the capability for the user to launch an intent using voice, but now, with the voice category intent filter, creating a voice interaction request is really easy.
Fingerprints
There is a new fingerprint manager in the API which allows for really easy integration of fingerprint authentication. Your app will still contain all the UI, and the API does all the work on the backend. Alternatively, you can use the existing keyguard manager to show the lock screen and authenticate the user with their PIN, code or whatever they set up.
Backup
In Android M, by default all app data will be backed up to the user's Google account. The user can opt out of this, as can the developer, by not using the <full-backup> tags. Everything else related to user data was already backed up in previous versions of Android and will continue to be.
Google Play Services
Version 7.5 is out. One of the most exciting features of this is the GCM network manager - a very cool way of making sure that your network requests are much more optimal for the device, even on older versions (unlike in L, where the app had to have this explicitly written in). There are also a few more new features, like maps for Android Wear devices.
Power
Another focus in M is improving battery life. A new feature called Doze has been implemented, which uses the device's accelerometer to detect if the device has not been moved for a long period of time (hours to days) and exponentially decreases the number of network requests and scheduled app processes until the device is moved. If there are real-time alarms or high-priority tasks, the device will be woken and they will take priority. As soon as the device is plugged in or moved, Doze turns off. The same happens with apps you don't use for a few days or weeks: fewer resources are allocated to their background processes.
Data Binding
This idea obviously has been around on other platforms for a while but it's now being integrated with Android. Data binding is the ability to connect the data model to some of the UI elements in the application. Data binding is cool.
UI Features
The Android design support library has been updated with the new material design, with best practices embedded, and a FAB (floating action button) so developers don't have to create their own circles with a shadow anymore. There are also updates to RecyclerView and WebView.
Notifications
Notifications can now contain a resource ID for their icon, and for the first time you can use a bitmap. You no longer need a million assets for each possible condition; icons can now be generated on the go, like downloading an asset from your web back-end.
Text
Better text selection (use ActionMode.TYPE_FLOATING in your code) gives a floating palette of icons that is easier for users. There is also an improved ability to process text, and better formatted text, finally.
App Links
Android M will now be able to securely distinguish between app and web links made by the same developer. It is very simple to set up, in the manifest and on the server, with auto-verified certificates and app links, so links to those URLs will open in your app. As with everything else in M, the user can take control and modify this in settings.
Direct Share
Allowing the user to share more easily. You provide information with an intent filter, and for a certain intent a list of possible targets is returned.
Better Stylus Support
Stylus support has been around for a while, but it has been greatly improved in M. You can create a data stream to receive pressure and button data over the BLE protocol, and Android can fuse this with touch data on the glass. What this means is you can build a Bluetooth stylus that registers as a stylus in any app on M that supports styluses - no special hardware, no app modifications, nothing complicated - so interrogating stylus data is very easy.
There will also be some motion events added in M to help deal with styluses, like ACTION_BUTTON_RELEASE and BUTTON_STYLUS_SECONDARY.
Graphics and Media
RenderScript Compute:
- BLAS intrinsics (... really big matrices)
- Allocation-less launches (size of kernel separate from data)
- ScriptGroup (more dependency types, better compiler optimisations)
There are also improvements to the camera API, and even better, the flashlight is no longer linked to the camera.
Alpha Optimisation
MIDI
The android.media.midi package has been introduced. You could already do MIDI if you did it manually; now this new package gives the developer a bytestream that can send and receive note information.
Higher Res Audio
Audio is now single-precision float (rather than the 16-bit samples of before), with multi-channel USB digital audio.
Android Studio 1.3
Integrated testing support, better tooling, new language support, more support for vector drawables, and Android NDK development. The systray has also been cleaned up.
Introduction to react.js
React is a JavaScript library built at Facebook. It was built to answer the question: "How should we structure JavaScript applications?"
There are a lot of JavaScript frameworks that try to answer this question; most of them are MVC-based (or MVVM, or MVW). Basically they're all based around models, which are just observable objects with some events API that allows you to subscribe to changes on that object. Developers then set up bi-directional data-binding to subscribe to changes on the model, so whenever something changes the view can be mutated and updated.
React is a JavaScript library for building user interfaces, you get all the good parts of a complete render, but without the downsides such as performance and loss of data.
At the heart of React are declarative components: describing what components look like at any point in time.
Initial Render
There is no explicit data binding; in React we just define one render function, whose purpose is to describe what your view looks like at any point in time. It returns a representation of your view, and render is called recursively to build up this hierarchy. When we want to generate the mark-up of this representation for the first time, we iterate over the representation, generate a string, and inject it into the document. This is called two-pass rendering: first the string is generated, then, after it is injected into the document, the event handlers are attached at the top level. This exposes some really interesting opportunities: since you're generating the string somewhere separate from where you're hooking up your events, you can render on the server.
Update Rendering
Instead of mutation, React updates through a process called reconciliation, whose purpose is to keep your UI up to date as your data changes, automatically updating your views and the DOM. The same render function that did the initial rendering returns a representation of what our components should look like at that point in time; React compares that with the current DOM, finds all the differences, and based on those differences creates DOM updates for just the relevant parts of the view.
Building DOM Representations
Since the HTML is defined in JavaScript, it would get hard to understand for larger pages with a lot of nesting - there would be curly braces everywhere. For that reason the JSX syntax is used to define elements. This is very similar to other templating engines and uses ordinary HTML-like syntax.
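As a small sketch using the React API of the time (React.createClass and React.render; the component name is made up), the JSX compiles down to React.createElement calls:
var HelloMessage = React.createClass({
  render: function () {
    // JSX: looks like HTML, compiles to React.createElement(...)
    return <div>Hello {this.props.name}</div>;
  }
});

React.render(<HelloMessage name="world" />, document.getElementById('container'));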
This post is based on information given by Tom Occhino from Facebook in his series about React.js.
How to create a web service to send emails for your Android, iOS or web application
Since it's a common task to have to send emails from your app, this post outlines the quickest way to get a mail service up and running using server-side JavaScript, with Parse and Mandrill. No JavaScript or web coding experience is needed.
Set up Parse
1. Go to parse.com and create a Cloud Code app, following the process
2. Download https://www.parse.com/downloads/windows/console/parse.zip
3. Extract the zip
4. Run parse console
5. cd into your working directory
6. run the command parse new <name-of-project>
7. make changes to your code if you like
8. run the command parse deploy
done
Set up Mandrill
Mandrill is an email infrastructure service by MailChimp. It's free to use up to a limit of 12,000 emails per month (and 250 per hour), and it's easy to set up.
Head over to https://mandrill.com/ and sign up to get an API key.
The Code
Inside your new Parse project, there should be a folder called cloud, cd into that and create a file called main.js (if it doesn't already exist).
Paste the following code into cloud/main.js
Parse.Cloud.define("sendMail", function (request, response) {
  var Mandrill = require('mandrill');
  Mandrill.initialize('<Mandrill_api_key>');
  Mandrill.sendEmail({
    message: {
      text: request.params.text,
      subject: request.params.subject,
      from_email: request.params.fromEmail,
      from_name: request.params.fromName,
      to: [{
        email: request.params.toEmail,
        name: request.params.toName
      }]
    },
    async: true
  }, {
    success: function (httpResponse) {
      console.log(httpResponse);
      response.success("Email sent!");
    },
    error: function (httpResponse) {
      console.error(httpResponse);
      response.error("ERROR - mail failed to send");
    }
  });
});
Change <Mandrill_api_key> to your Mandrill API key (obviously without the pointy brackets).
Once this is done, run parse deploy to push your work to Parse.
Calling your service
Below are all the parameters you'll need to send emails from your application.
URL:
https://api.parse.com/1/functions/sendMail
Header                 | Value
Content-Type           | application/json
Accept                 | application/json
X-Parse-Application-Id | <Your_parse_application_id>
X-Parse-REST-API-Key   | <Your_parse_rest_api_key>
Raw JSON body:
{
"toEmail":"someone@hotmail.com",
"toName":"jane doe",
"fromEmail":"someone_else@live.com",
"fromName":"john smith",
"text":"this is the email body for the main message",
"subject":"this is the email subject"
}
You will probably want to test this out before you include it in your app. A good way to do this is to use the Postman client, available free on the Chrome store (similar tools exist for Firefox and Safari).
Fill in the form so that it looks like the image below, and you should see your new email service working :)
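Alternatively, a curl request along these lines should exercise the same endpoint from the command line (substitute your own keys; the body values are just examples):
curl -X POST \
  -H "X-Parse-Application-Id: <Your_parse_application_id>" \
  -H "X-Parse-REST-API-Key: <Your_parse_rest_api_key>" \
  -H "Content-Type: application/json" \
  -d '{"toEmail":"someone@hotmail.com","toName":"jane doe","fromEmail":"someone_else@live.com","fromName":"john smith","text":"hello from curl","subject":"test email"}' \
  https://api.parse.com/1/functions/sendMail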