Recording, editing and optimizing audio for iPhone development

It’s no secret that I have another iPhone app in the works, and that it will be available soon. I have been reluctant to give out a lot of detail, though, simply because, well, I’m a bit paranoid. I will say that it involves Brent and me recording audio for playback in the app, and that it has been interesting working with audio on the iPhone.

The first thing I noticed is that when working with .wav files, it’s easy to end up with files that are far too large for the iPhone to play back properly. I started working with .wav files for two reasons. First, I have a ton of them, which made it easy to drop files in to test with. Second, compressed audio files cannot be overlapped on the iPhone, so if you want sounds to play simultaneously, .wav files are your Huckleberry.

First, I needed a microphone that wasn’t going to sound like an AM radio, so I went out (dispatched the wife, actually – thanks, dear) and picked up a Blue Snowball. As it turns out, the Snowball is every bit as good as the reviews say it is, and I would highly recommend it. With the Snowball plugged in and Soundbooth open, we were all set, and the recording commenced. Recording the audio was easily the most enjoyable part of the project so far; once the app comes out, you’ll see why.

When we were finished, I ended up with about two dozen CD-quality .wav files, which still needed to be trimmed down. The files ranged in size from 300 KB up to about 1.2 MB. I trimmed them up but left them as dual-channel files and tried to get them to run in my app. Needless to say, the files choked both my iPod and iPhone and needed to be optimized before they would work. I tried a few things and didn’t get very far in Soundbooth, so I opened them up in Audacity and converted them to mono by removing one of the channels. I also took the time to trim the ends so there wasn’t a lot of unneeded dead space. The files are all now between 20 KB and 120 KB in size, and they still sound great.

I have two different players in action on a couple of views in the app; one plays a single track at a time using AVAudioPlayer. By digging around in some of Apple’s sample code and online info (particularly from iPhone Dev SDK), I was able to get it working.

Here’s the .h code:

[code="obj-c"]
#define kAccelerationThreshold 1.8
#define kUpdateInterval (1.0 / 10.0f)

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface TauntViewController : UIViewController <UIAccelerometerDelegate, AVAudioPlayerDelegate> {
    AVAudioPlayer *_player;
    NSArray *soundArray;
    BOOL soundEnded;
}

@property (nonatomic, retain) AVAudioPlayer *_player;
@property (nonatomic, retain) NSArray *soundArray;
@property (nonatomic, assign) BOOL soundEnded;

- (IBAction)playTheSound;
- (IBAction)resetSound;

@end[/code]

And the .m file code:

[code="obj-c"]
#import "TauntViewController.h"

#include <math.h>
#include <time.h>

@implementation TauntViewController

@synthesize _player;
@synthesize soundArray;
@synthesize soundEnded;

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];

    UIAccelerometer *appAccel = [UIAccelerometer sharedAccelerometer];
    appAccel.delegate = self;
    appAccel.updateInterval = kUpdateInterval;

    soundEnded = YES;

    // Load the array with the sound files for this view
    self.soundArray = [NSArray arrayWithObjects:@"sound1", @"sound2", @"sound3",
                       @"sound4", @"sound5", @"sound6", nil];
}

- (void)viewDidLoad {
    [super viewDidLoad];
    // Seed the generator so the sound order varies between launches
    srandom(time(NULL));
}

- (void)startPlayback {
    if ([self._player play]) {
        self._player.delegate = self;
    } else {
        NSLog(@"Could not play %@\n", self._player.url);
    }
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
}

- (void)viewDidUnload {
    self._player = nil;
    [super viewDidUnload];
}

- (void)dealloc {
    [_player release];
    [soundArray release];
    [super dealloc];
}

#pragma mark -

- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
    // A hard enough shake on any single axis triggers a sound
    if (fabs(acceleration.x) > kAccelerationThreshold ||
        fabs(acceleration.y) > kAccelerationThreshold ||
        fabs(acceleration.z) > kAccelerationThreshold) {
        [self playTheSound];
    }
}

- (void)playTheSound {
    // Play a random sound file, but only if nothing is currently playing
    if (soundEnded) {
        soundEnded = NO;

        int soundId = random() % soundArray.count;
        NSString *newSound = [soundArray objectAtIndex:soundId];
        NSURL *soundURL = [[NSURL alloc] initFileURLWithPath:
            [[NSBundle mainBundle] pathForResource:newSound ofType:@"wav"]];

        // The retain property keeps the player alive; release our
        // temporary references so nothing leaks on repeated shakes
        AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:nil];
        self._player = newPlayer;
        [newPlayer release];
        [soundURL release];

        [self startPlayback];
    }
}

- (void)resetSound {
    soundEnded = YES;
}

#pragma mark AVAudioPlayer delegate methods

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    [self resetSound];
    [player setCurrentTime:0.0];
}

@end[/code]

This loads a random sound from soundArray into the player every time you shake the device, as long as a sound is not currently playing. One thing I had to do, which I’m not sure is the best way to handle it, is set up the accelerometer in viewWillAppear so that the correct sounds are loaded for the view currently being used.

With the optimized sounds, the code works great. I thought about trying the sounds at a higher quality, in stereo, or as MP3s, but I haven’t tried it yet. Honestly, I’m not sure I will: the files are so small, they sound quite good, and I’m not sure there would be anything to gain by using MP3s over the .wav files.

Like I said, the app will be out soon, and I hope everyone will like it as much as I do.
