Look us up here for the WebRTC Fiddle of the Month, created… once a month.
Or just enroll in one of our excellent WebRTC training courses.
Tsahi: Hi everyone and welcome to the WebRTC Fiddle of the Month – the first one for 2021. This time we’re going to talk about codec capabilities. How do you know exactly what capabilities or codecs your WebRTC implementation in the browser supports? Do you have VP9 there, H.264, AV1… And what are the capabilities inside the codec? What profiles does it support? So Philipp here decided to write something for us. Show us the fiddle, Philipp.
Philipp: Yes. Let me show you the screen as we usually do.
So we have a fiddle which runs in Chrome and Firefox this time. We are going to look at 3 ways of getting the information.
One is by using the RTCRtpSender.getCapabilities() method.
The second is a similar method on the RTCRtpReceiver, and the third one is using a peer connection to get that information.
Tsahi: OK, why do we need these two approaches – getCapabilities and looking at the connection?
Philipp: Well, if you look in Firefox, these buttons here – RTCRtpSender.getCapabilities() and RTCRtpReceiver.getCapabilities() – are not available, because these objects and the static methods on them are not available in Firefox yet.
So if you support Firefox, you need to use a peer connection to discover that information, if you’re interested in it.
Tsahi: So that’s like: I’m going to create a peer connection, I’m going to setLocalDescription(), and then look at the SDP, parse the SDP and extract the data from there.
Philipp: Yes, we can start with that method if you want.
Philipp: We have a peerconnection onclick handler, which is an asynchronous function. So we create a connection here. We’re adding a transceiver, and we’re making it a sendonly transceiver, which changes the results: a sendonly transceiver will only show you codecs that you’re capable of sending, while a recvonly transceiver might show different codecs or different codec profiles. That actually happens with VP9.
Then we create an offer.
So far so good. Then we’re using my favorite SDP utils to parse the SDP. This line here splits the SDP into the session part and the different media sections. We discard the session part because it doesn’t contain any information about the codecs; it’s just generic stuff for the session. Then we iterate over the remaining sections – which is a single section, because we called addTransceiver once, so we have a single media section.
And from that, we use the helper function parseRtpParameters, which is nice because it parses all the rtpmap lines and the codecs. Then we iterate over the codecs and log each codec’s name and parameters. We do that in a format similar to what you can get from the getCapabilities function, and then we close the peer connection.
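The parsing step described above can be sketched with simplified stand-ins for the SDP utils helpers. The fiddle uses the real library’s splitSections and parseRtpParameters; these minimal versions (the names and the toy SDP below are illustrative, not the fiddle’s actual code) only handle what this example needs, so the logic can run outside the browser too.

```javascript
// Simplified stand-in: split an SDP blob into the session part
// plus one entry per m= section.
function splitSections(sdp) {
  const parts = sdp.split('\nm=');
  return parts.map((part, i) => (i > 0 ? 'm=' + part : part).trim() + '\r\n');
}

// Simplified stand-in: extract {name, parameters} for each codec in a
// media section by reading its a=rtpmap: and a=fmtp: lines.
function parseCodecs(mediaSection) {
  const codecs = {};
  for (const line of mediaSection.split('\n').map(l => l.trim())) {
    let match = line.match(/^a=rtpmap:(\d+) ([^/]+)\/(\d+)/);
    if (match) {
      codecs[match[1]] = {name: match[2], clockRate: parseInt(match[3], 10), parameters: ''};
      continue;
    }
    match = line.match(/^a=fmtp:(\d+) (.*)/);
    if (match && codecs[match[1]]) {
      codecs[match[1]].parameters = match[2];
    }
  }
  return Object.values(codecs);
}

// A toy offer fragment; in the fiddle this would come from pc.createOffer().
const sdp = [
  'v=0',
  'm=video 9 UDP/TLS/RTP/SAVPF 96 98',
  'a=rtpmap:96 VP8/90000',
  'a=rtpmap:98 VP9/90000',
  'a=fmtp:98 profile-id=0',
].join('\r\n');

const [, videoSection] = splitSections(sdp);
console.log(parseCodecs(videoSection));
// logs two codec entries: VP8, and VP9 with parameters 'profile-id=0'
```

In the browser the same loop runs over the offer created from the sendonly transceiver, which is why the result changes when the transceiver direction changes.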
Tsahi: Why did you decide to do it for sendonly and not recvonly in the jsFiddle?
Philipp: We can change that and then see if it changes the result. We have here a sendonly one, and it has VP8, VP9 profile 0, VP9 profile 2, H.264 with different profile level IDs and different packetization modes. We have retransmission (RTX) associated with those codecs, and we have RED and ULPFEC.
To answer your question, let’s change this to recvonly. I think we need to hit run and use a peer connection and, oh – we’re receiving VP9 profiles 0, 2 and 1 now. If we compare that to the sendonly result: we can decode VP9 profile 1, which is a certain mode of VP9, but we cannot encode it.
And we can see that from this SDP we created.
Tsahi: So we support 3 VP9 profiles for decoding and 2 for encoding.
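That decode/encode asymmetry can be made explicit by diffing two codec lists. This is a small illustrative helper, not part of the fiddle; the VP9 lists below mirror the Chrome behavior discussed above but are hardcoded for the example.

```javascript
// Given two codec lists (e.g. parsed from a recvonly and a sendonly offer,
// or taken from getCapabilities), return the entries that appear only in
// the first list. Keying on mimeType plus sdpFmtpLine makes VP9 profile 0
// and VP9 profile 2 count as distinct entries.
function codecsOnlyIn(listA, listB) {
  const key = c => `${c.mimeType}|${c.sdpFmtpLine || ''}`;
  const inB = new Set(listB.map(key));
  return listA.filter(c => !inB.has(key(c)));
}

// Illustrative Chrome-like lists: three decodable VP9 profiles,
// two encodable ones.
const decode = [
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=0'},
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=1'},
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=2'},
];
const encode = [
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=0'},
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=2'},
];

console.log(codecsOnlyIn(decode, encode));
// one entry: VP9 profile-id=1 is decode-only
```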
Philipp: Yes. And one of the issues you have: if Chrome receives an offer that doesn’t contain any codec Chrome supports, Chrome will unfortunately not follow the spec and will throw an error from setRemoteDescription. So you typically need to find out first: OK, I want to use H.264 – do I support H.264?
For example, on Android, H.264 might not be supported: if there’s no hardware decoder, there’s no software decoding fallback.
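A pre-check along those lines could look like the sketch below. The function name and data shapes are illustrative assumptions; in a page, `offeredCodecs` would come from parsing the remote SDP and `supported` from RTCRtpReceiver.getCapabilities('video').

```javascript
// Before calling setRemoteDescription, check that the offer contains at
// least one video codec we can receive, since Chrome throws when no
// offered codec is supported.
function haveCommonCodec(offeredCodecs, supported) {
  const names = new Set(supported.codecs.map(c => c.mimeType.toLowerCase()));
  return offeredCodecs.some(c => names.has(`video/${c.name}`.toLowerCase()));
}

// Illustrative data: the remote side offers only H.264, but this endpoint
// (say, an Android device without a hardware decoder) lacks it.
const offered = [{name: 'H264'}];
const supported = {codecs: [{mimeType: 'video/VP8'}, {mimeType: 'video/VP9'}]};
console.log(haveCommonCodec(offered, supported)); // → false: reject or renegotiate
```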
If we do the same in Firefox, we get a different result: we have VP8, VP9 with no profiles listed, and just a single H.264 profile with 2 packetization modes.
Now let’s look at the other methods, the sender and receiver capabilities, which are much simpler. We just say codecs is RTCRtpSender.getCapabilities('video').codecs, and then we iterate over that. We log the mimeType and the sdpFmtpLine, which is the thing that appears in the SDP as well. If we do that, we see that we support VP8. We support retransmissions – but it doesn’t say we support retransmissions for this particular codec, so it applies to all codecs, basically. We see we support 2 VP9 profiles as well as 4 different H.264 profiles.
If you’re on Windows, you will see H.264 codecs with different profile level IDs and different packetization modes. You can use this API to determine if you support a certain profile level.
On the receiver, we see that we get VP9 profile 1, which we don’t have on the sender.
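The logging loop Philipp describes can be factored into a pure function, sketched below, so the same formatting runs outside a browser. In a page you would call it with RTCRtpSender.getCapabilities('video') or RTCRtpReceiver.getCapabilities('video'); the mock object and its values here are illustrative.

```javascript
// Render a capabilities object as one line per codec: the mimeType,
// followed by the sdpFmtpLine when the codec has parameters.
function formatCapabilities(capabilities) {
  return capabilities.codecs.map(c =>
    c.sdpFmtpLine ? `${c.mimeType} ${c.sdpFmtpLine}` : c.mimeType);
}

// Mock shaped like a getCapabilities('video') result (illustrative values).
const mock = {codecs: [
  {mimeType: 'video/VP8'},
  {mimeType: 'video/rtx'},
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=0'},
  {mimeType: 'video/VP9', sdpFmtpLine: 'profile-id=2'},
]};
formatCapabilities(mock).forEach(line => console.log(line));
// video/VP8
// video/rtx
// video/VP9 profile-id=0
// video/VP9 profile-id=2
```

Note how RTX shows up as its own entry without per-codec parameters, matching the point above that retransmission support is not tied to a specific codec in this listing.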
Tsahi: We’ve seen that we sometimes – probably often – need to look at which codecs we’re using and be more aware of that. There are two ways to do that.
One of them is to use RTCRtpSender.getCapabilities and RTCRtpReceiver.getCapabilities, which are simply APIs that go and get us the data. And if we can’t access these, we can just use a peer connection to generate that information in the SDP and then parse the SDP.
Philipp: The main difference is that the peer connection gives you that in an asynchronous way – you need an async function – while the getCapabilities API is synchronous, which comes with its own problems: apparently there is an issue that hardware encoders and decoders take time to initialize, and if you call getCapabilities too early, the hardware encoder might not have announced itself to the WebRTC layer yet, which means you would not get the information that H.264 is supported while it actually is.
Tsahi: When do they get initialized? What do I need to do on the API level to initialize them?
Philipp: They will initialize on page load, I think. And that’s the thing: you don’t know when it’s safe to call getCapabilities, and it’s a synchronous function. So what do you do? Do you poll it? Do you call it after one second?
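One speculative workaround for this open question is to poll getCapabilities until the codec shows up or a deadline passes. This is a sketch of that idea, not a recommendation from the transcript; the getter is injected so the logic can be exercised with a fake (in a browser it would be `() => RTCRtpSender.getCapabilities('video')`).

```javascript
// Poll a capabilities getter until a given mimeType appears, up to a
// timeout. Resolves true if found, false if the deadline passes first.
async function waitForCodec(getCaps, mimeType, {interval = 100, timeout = 2000} = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    const caps = getCaps();
    if (caps && caps.codecs.some(c => c.mimeType === mimeType)) return true;
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  return false;
}

// Fake getter that "announces" H.264 only on the third call, simulating a
// hardware encoder registering late with the WebRTC layer.
let calls = 0;
const fakeGetCaps = () => ({
  codecs: ++calls < 3 ? [{mimeType: 'video/VP8'}]
                      : [{mimeType: 'video/VP8'}, {mimeType: 'video/H264'}],
});

waitForCodec(fakeGetCaps, 'video/H264', {interval: 10}).then(found =>
  console.log('H.264 available:', found)); // → H.264 available: true
```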
Tsahi: I think we will leave off with this question and wait for next month, for our next fiddle.