I’ve been reading that both the TensorFlow and PyTorch deep learning frameworks now support Apple M1/M2 chips (via their Metal/MPS GPU backends). In principle, this means a deep learning acoustic species classifier could run up to an order of magnitude faster on these chips than on an Intel or AMD PC without an Nvidia graphics card. Has anyone tried these Apple chips for acoustics yet and, if so, were you able to measure a performance increase?
EDIT: Although this question may seem general, there are nuances to applying deep learning models in acoustics. For example, there is a whole acoustic processing chain that converts a waveform to an image (usually a spectrogram), which can involve multiple signal processing steps such as filtering and FFT computation that may or may not be optimised for a given chip. In addition, the community often uses frameworks such as Ketos or AnimalSpot for developing models, which may perform differently depending on whether the developer has optimised for different types of processor. The question is therefore asked in a bioacoustics context, and answers should ideally focus on performance increases in bioacoustics applications.
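To make the comparison concrete, here is a minimal sketch of the kind of benchmark I have in mind: it picks Apple's MPS backend in PyTorch when available (falling back to CUDA or CPU), then times the waveform-to-spectrogram front-end step described above. The clip length, sample rate, and FFT parameters are placeholder values, not from any particular bioacoustic pipeline.

```python
import time
import torch

# Pick the fastest available backend: Apple's Metal Performance Shaders
# (MPS) on M1/M2 chips, otherwise CUDA, otherwise plain CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# Hypothetical workload: a 60 s clip at 48 kHz converted to a magnitude
# spectrogram with a 1024-point FFT -- the front-end step a classifier
# would run before the network itself.
waveform = torch.randn(60 * 48_000, device=device)
window = torch.hann_window(1024, device=device)

start = time.perf_counter()
spec = torch.stft(waveform, n_fft=1024, hop_length=512,
                  window=window, return_complex=True).abs()
elapsed = time.perf_counter() - start

print(f"device={device.type}  spectrogram shape={tuple(spec.shape)}  "
      f"time={elapsed * 1000:.1f} ms")
```

Running the same script on an M1/M2 machine and on an Intel/AMD box would give one directly comparable number for the signal-processing part of the chain; the model inference itself would need a separate timing.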