
Extracting PCM Data from CMSampleBufferRef

starmier · 简书 · 2019-06-02 16:44


Pulse-code modulation (PCM) converts an irregular analog signal into a digital signal so that it can be stored on physical media.
Sound is itself an analog signal within a particular frequency range (roughly 20 Hz to 20 kHz), so the same technique can digitize it and save it.
PCM is the rawest format in which recorded sound is stored. A WAV file, for example, is essentially a PCM stream with a header prepended, and WAV is sometimes called a lossless format precisely because it stores the original PCM data (the actual fidelity also depends on the sample rate and bit depth). Familiar formats such as MP3 and AAC are lossy: to save space, they compress the audio as far as possible while losing as little audible quality as possible.
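
As an aside on the WAV point above, here is a minimal sketch (not from the original article) of wrapping a 16-bit PCM buffer in the standard 44-byte RIFF/WAVE header; the helper name wavDataFromPCM and its parameters are hypothetical. iOS is little-endian, which matches the byte order WAV expects, so the integer fields can be appended directly.

#import <Foundation/Foundation.h>

// Hypothetical helper: prepend a canonical 44-byte WAV header to 16-bit PCM.
static NSData *wavDataFromPCM(NSData *pcm, uint32_t sampleRate, uint16_t channels) {
    uint16_t bitsPerSample = 16;
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t dataLen    = (uint32_t)pcm.length;
    uint32_t riffLen    = 36 + dataLen;          // header bytes after "RIFF<len>"

    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + dataLen];
    [wav appendBytes:"RIFF" length:4];
    [wav appendBytes:&riffLen length:4];
    [wav appendBytes:"WAVE" length:4];
    [wav appendBytes:"fmt " length:4];
    uint32_t fmtLen = 16;                        // size of the fmt chunk body
    uint16_t pcmFormat = 1;                      // 1 = linear PCM
    [wav appendBytes:&fmtLen length:4];
    [wav appendBytes:&pcmFormat length:2];
    [wav appendBytes:&channels length:2];
    [wav appendBytes:&sampleRate length:4];
    [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2];
    [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];
    [wav appendBytes:&dataLen length:4];
    [wav appendData:pcm];                        // raw PCM payload follows the header
    return wav;
}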
Every audio encoder accepts PCM input, and recorded sound is PCM by default, so our next step is to extract the recorded PCM data.

#import <CoreMedia/CoreMedia.h>

- (NSData *)convertAudioSampleBufferToPcmData:(CMSampleBufferRef)audioSample {

    AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(audioSample));

    // Get the CMBlockBufferRef that holds the raw audio bytes
    CMBlockBufferRef blockBufferRef = CMSampleBufferGetDataBuffer(audioSample);
    // Size of the PCM payload in bytes
    size_t length = CMBlockBufferGetDataLength(blockBufferRef);

    // Allocate a heap buffer; the returned NSData will take ownership of it
    char *buffer = malloc(length);
    // Copy the bytes straight into our own buffer
    CMBlockBufferCopyDataBytes(blockBufferRef, 0, length, buffer);

    // If the samples are big-endian, byte-swap each 16-bit sample to little-endian
    if ((inAudioStreamBasicDescription.mFormatFlags & kAudioFormatFlagIsBigEndian) == kAudioFormatFlagIsBigEndian)
    {
        for (size_t i = 0; i + 1 < length; i += 2)
        {
            char tmp = buffer[i];
            buffer[i] = buffer[i+1];
            buffer[i+1] = tmp;
        }
    }

    // Channel count and sample rate, should the caller need them
    uint32_t ch = inAudioStreamBasicDescription.mChannelsPerFrame;
    uint32_t fs = inAudioStreamBasicDescription.mSampleRate;

    // Return the data; NSData frees the buffer when it is deallocated
    return [NSData dataWithBytesNoCopy:buffer length:length freeWhenDone:YES];
}
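
For context, a hedged sketch of where this conversion is typically invoked: the sample-buffer callback of AVCaptureAudioDataOutputSampleBufferDelegate. The AVCaptureSession setup is assumed and omitted here.

#import <AVFoundation/AVFoundation.h>

// Sketch: AVFoundation calls this for every captured audio buffer,
// assuming this class was set as the AVCaptureAudioDataOutput delegate.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSData *pcm = [self convertAudioSampleBufferToPcmData:sampleBuffer];
    // Hand the PCM off to an encoder, a file writer, a network stream, etc.
}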

Filling a CMSampleBufferRef with PCM data

The sample bit depth tells us how many bytes one sample point occupies. With 16-bit samples, for example, each sample point takes 2 bytes, so the amount of data needed for 200 ms is:

// Number of sample points in 200 ms
NSUInteger samples = self->mSampleRate * 200 * self->mChannelsPerFrame / 1000;
// Number of PCM bytes in 200 ms (2 bytes per 16-bit sample)
int len = (int)(samples * 2);
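
For example, at a 44100 Hz sample rate with a single channel, samples = 44100 * 200 * 1 / 1000 = 8820 sample points, so len = 8820 * 2 = 17640 bytes of PCM for 200 ms.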

Code example for filling a CMSampleBufferRef with PCM:

- (CMSampleBufferRef)createAudioSampleBuffer:(char *)buf withLen:(int)len withASBD:(AudioStreamBasicDescription)asbd {

    // Wrap the PCM bytes in an AudioBufferList (one interleaved buffer)
    AudioBufferList audioData;
    audioData.mNumberBuffers = 1;
    char *tmp = malloc(len);
    memcpy(tmp, buf, len);

    audioData.mBuffers[0].mData = tmp;
    audioData.mBuffers[0].mNumberChannels = asbd.mChannelsPerFrame;
    audioData.mBuffers[0].mDataByteSize = len;

    // Build a format description from the ASBD
    CMSampleBufferRef buff = NULL;
    CMFormatDescriptionRef format = NULL;
    OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, NULL, 0, NULL, NULL, &format);
    if (status) { // failed
        free(tmp);
        return NULL;
    }

    // For linear PCM one packet is one frame, so the sample count in this
    // buffer is the byte count divided by the bytes per frame
    CMItemCount numSamples = len / asbd.mBytesPerFrame;
    // Each sample lasts 1/mSampleRate seconds
    CMSampleTimingInfo timing = { CMTimeMake(1, (int32_t)asbd.mSampleRate), kCMTimeZero, kCMTimeInvalid };

    status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &buff);
    if (status) { // failed
        free(tmp);
        CFRelease(format);
        return NULL;
    }

    // Copy the AudioBufferList into the sample buffer's data buffer
    status = CMSampleBufferSetDataBufferFromAudioBufferList(buff, kCFAllocatorDefault, kCFAllocatorDefault, 0, &audioData);

    free(tmp);
    CFRelease(format);

    if (status) { // failed
        CFRelease(buff);
        return NULL;
    }

    return buff;
}
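
A hedged usage sketch, assuming 16-bit signed, interleaved stereo PCM at 44.1 kHz; the variables pcmBytes and pcmLen are hypothetical placeholders for a buffer produced elsewhere (e.g. by the 200 ms computation above).

// Describe the PCM we are about to wrap (illustrative values)
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * asbd.mBitsPerChannel / 8;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;

CMSampleBufferRef sbuf = [self createAudioSampleBuffer:pcmBytes withLen:pcmLen withASBD:asbd];
if (sbuf) {
    // e.g. append to an AVAssetWriterInput, then release our reference
    CFRelease(sbuf);
}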


