Recording and analysing environmental audio has become a common approach for monitoring the environment. It offers several advantages over other approaches, such as reducing costs by removing the need for experts to be present in the area of interest. A persistent problem in analysing environmental recordings is interference from noise, which can mask vocalisations of interest, making them harder to detect and requiring additional resources. While some work has been done to remove stationary noise from environmental recordings, little effort has gone into removing noise from non-stationary sources such as rain, wind, engines, and animal vocalisations that are not of interest. This work addresses the challenge of filtering rain and cicada choruses from recordings containing bird sound. The use of acoustic indices and Mel Frequency Cepstral Coefficients (MFCCs) with machine learning classifiers is investigated to find the most effective filters. Hyperparameters for several classification approaches are tuned to achieve the best results. The approach enables users to set thresholds that increase or decrease the sensitivity of classification, based on the prediction probabilities output by the classifiers. A novel approach to removing cicada choruses using bandpass filters is also proposed. A threshold-based approach to rain detection (a Multi-Layer Perceptron using acoustic indices and MFCCs) is derived that achieves an AUC of 0.9911 and is more accurate than existing approaches when set to the same sensitivities. Cicada choruses in the training set are classified with 100% accuracy under 10-fold cross-validation by a Support Vector Machine (SVM) classifier with MFCCs.
The cicada filtering approach greatly increased the median signal-to-noise ratio of affected recordings, from 0.53 for unfiltered audio to 1.86 for audio processed by both the cicada filter and a common stationary noise filter.