Biodiversity monitoring is a key element of environmental and ecosystem protection planning and policy design. For many animal species that produce vocalisations (such as most birds), acoustic methods are the most effective means of detection, identification and surveying. However, covering areas at a global scale with expert human surveyors is not viable, and similar scale limitations apply to the deployment of static recording stations. Furthermore, processing the very large data volumes that such global coverage would produce is only possible with automatic recognition methods. The widespread use of audio-enabled mobile devices (such as smartphones and tablet PCs), together with their continually increasing processing power, offers a way to overcome these limitations. Beyond making it possible to run state-of-the-art machine learning techniques on these devices (thus crowd-sourcing the species identification and surveying process at a global scale), their sophisticated user interface capabilities allow the human user to provide corrective input to the classification algorithms. This citizen science element can, in turn, be used to better train the machine learning algorithms. We are developing machine learning algorithms that are amenable to implementation on the mobile device platform, and we are investigating their efficacy for audio-based bird species identification with continual training based on human corrective input.
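The continual-training loop described above — the device predicts a species, the user optionally corrects the label, and the correction is folded back into the model — can be sketched as follows. This is a hypothetical illustration, not the authors' actual system: it assumes acoustic feature vectors (e.g. MFCCs) are extracted elsewhere, and it stands in a lightweight incremental nearest-centroid classifier for whatever on-device model is actually used; the species names and feature values are invented.

```python
import math

class IncrementalBirdClassifier:
    """Toy on-device classifier: one running-mean centroid per species,
    updated online with (possibly human-corrected) labelled examples."""

    def __init__(self):
        self.centroids = {}  # species -> running-mean feature vector
        self.counts = {}     # species -> number of examples seen

    def predict(self, features):
        """Return the species whose centroid is nearest (Euclidean)."""
        if not self.centroids:
            return None
        return min(
            self.centroids,
            key=lambda s: math.dist(features, self.centroids[s]),
        )

    def update(self, features, species):
        """Fold one labelled example into that species' running mean --
        the 'continual training' step driven by user corrective input."""
        n = self.counts.get(species, 0)
        c = self.centroids.get(species, [0.0] * len(features))
        self.centroids[species] = [
            (ci * n + fi) / (n + 1) for ci, fi in zip(c, features)
        ]
        self.counts[species] = n + 1

clf = IncrementalBirdClassifier()
clf.update([1.0, 0.0], "chaffinch")   # initial labelled examples
clf.update([0.0, 1.0], "blackbird")
guess = clf.predict([0.9, 0.1])       # device's automatic prediction
# If the user corrects (or confirms) the label, feed it back in:
clf.update([0.9, 0.1], "chaffinch")
```

The design choice this sketch highlights is that each user correction immediately updates the model with a cheap, constant-time operation, which is the kind of update a resource-constrained mobile device can afford.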