🎻 December 2020 fiddle
Look us up here for the WebRTC Fiddle of the Month, created… once a month.
Or just enroll in one of our excellent WebRTC training courses.
Tsahi: Hi everyone, and welcome to the WebRTC Fiddle of the Month.
This time, what we’re going to talk about is AddStream, AddTrack and AddTransceiver. That is: how the hell do you add media channels into WebRTC sessions? There are different ways to do that. Three of them, to be exact. We are going to figure out what to do. So first things first, let’s see what they are exactly and how they are used. So let me share quickly.
OK, so what we have here is Chrome’s usage statistics for these APIs, and you can see that AddStream appears in about 0.5% of page loads on the Internet today. There was huge growth due to the pandemic this year, which is to be expected. Now, if you go and check the others.
So AddTrack, for example, we see growth again, but somehow it’s less. It doesn’t reach the same numbers: AddStream is at 0.5%, while this one is at almost 0.1%. And then we’ve got AddTransceiver, which is, again, less than 0.1%. So somehow we’ve got three APIs: AddStream has the most use, while AddTrack and AddTransceiver are each at around 0.08%.
So what do you make of that, and which one should we be using? Philipp, over to you.
Philipp: So we have these three APIs and yes, the question is which one to use? AddStream is, as far as the specification is concerned, a legacy API. I think Safari does not even have a native implementation, even though adapter.js is shimming it. So what we are going to show you first is how to migrate your code from AddStream to AddTrack.
Tsahi: OK, so our first suggestion is don’t use AddStream and if you are using it, migrate away from it because someday it won’t be there.
Philipp: Well, maybe in the next decade.
So we have two fiddles.
Hopefully I have the right one here. We do the usual thing: we create a PeerConnection, wire up the ICE candidates, listen to the onnegotiationneeded event, and then we have an asynchronous function that acquires a stream and then uses either AddStream or AddTrack.
Then it does the normal negotiation we do with a PeerConnection, and then we enable a button to remove a track from the original stream we got from getUserMedia(), because getUserMedia() still operates on streams. But the PeerConnection API has switched to operating on tracks instead of streams.
When this code runs, we can see onnegotiationneeded fire once in both Chrome and Firefox.
Tsahi: OK, so the left side is Firefox, the right side is Chrome for the exact same jsFiddle.
Philipp: If we remove a track from the original stream in Firefox, nothing happens, which is fine.
So instead of calling AddStream, we can call stream.getTracks() and, for each track in the stream, call addTrack() with that track. That behaves exactly the same in Firefox.
Tsahi: OK, so what we did now is call getUserMedia(), which gave us a stream. Instead of doing AddStream, we iterate across all of the tracks in the stream and add them individually, one by one, into our PeerConnection. And it worked well, and we know that because the ONN label of the onnegotiationneeded callback is there.
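The migration described here can be sketched as a small helper. addTracksFromStream is a hypothetical name, not code from the fiddle:

```javascript
// Hypothetical helper sketching the migration from addStream to addTrack.
// Instead of pc.addStream(stream), iterate over the stream's tracks and add
// each one individually. Passing the stream as the second argument lets the
// remote side group the tracks back into the same stream.
function addTracksFromStream(pc, stream) {
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
}
```

This is the mechanical transformation Philipp mentions: one addStream() call becomes one addTrack() call per track.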
Tsahi: OK. Let’s go to the Chrome one.
Philipp: If we do the same with the same code, it behaves the same so far. But if we remove the video track from the original getUserMedia() stream, for example because we want to replace it locally with screen sharing... oh, onnegotiationneeded triggers.
That is because Chrome has a kind of mixed approach to this API. They said: if you want to remove video from a peer connection while operating on streams, we will listen to the stream’s events and then trigger events on the peer connection.
Tsahi: OK, so what you’re saying is that if I build an application where the video or audio need to change dynamically: being added, removed, changed, replaced, screen sharing, muting, large conferences, all of these things. In that case I’d better just use AddTrack, because Chrome behaves differently if I start with AddStream and then need to play with individual tracks inside the stream.
Philipp: Yes, it is simpler. Chrome’s behavior here, I think, is a bug, but it has been there for a long time and it made sense in the AddStream-only world.
Mm hmm. OK.
Tsahi: So what we need to do is switch our code from AddStream to AddTrack in order to make it work better, especially if we’re changing tracks dynamically.
Philipp: Yes. So that is the easy part. And there’s even code that I wrote which will automatically transform your code from AddStream to AddTrack; it’s a mechanical operation.
That is AddStream versus AddTrack.
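The dynamic removal discussed above could be sketched like this. removeVideoSender is a hypothetical helper, not the fiddle’s exact code:

```javascript
// Sketch: removing the video track from a peer connection that used addTrack.
// pc.getSenders() returns one RTCRtpSender per added track; removeTrack()
// stops sending on that sender and fires negotiationneeded so the change
// can be signaled to the other side.
function removeVideoSender(pc) {
  const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
  if (sender) {
    pc.removeTrack(sender);
  }
  return sender;
}
```

Because this operates on the peer connection’s senders rather than on the getUserMedia() stream, it behaves the same in Chrome and Firefox, avoiding the stream-event quirk shown above.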
Philipp: Now let’s look at AddTrack versus AddTransceiver, which is our second fiddle.
Tsahi: So we had AddStream at the beginning, then WebRTC introduced AddTrack, and then it introduced AddTransceiver.
Tsahi: So AddTransceiver is the latest and greatest.
Philipp: I would say AddTransceiver is the latest. And we have not removed AddTrack from the specification. AddTransceiver gives you more control over the SDP, because if you add a transceiver, you actually add an m-line to the SDP. But that can have some unexpected consequences if you’re using it. We’re going to show it in our fiddle:
So it’s a somewhat simpler PeerConnection setup, again with onicecandidate, and we are listening to onnegotiationneeded on both connections now.
And then we do the usual thing, we get a stream from getUserMedia(). We use AddTrack on the first peer connection to add the track.
Tsahi: OK, so again, instead of using AddStream, we switched here to AddTrack, the only difference being that we iterate over the tracks and shove them into the peer connection one by one.
Philipp: Yes. And then we call AddTransceiver on the second peer connection.
Tsahi: OK, why on the second one and not on the first one?
Philipp: Because I want to show a very specific behavior.
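For reference, addTransceiver() can be called with a kind instead of a track, which reserves an m-line in the next offer before any media exists. A minimal sketch, with a hypothetical prepareReceiveOnly helper that is not the fiddle’s exact code:

```javascript
// Sketch (browser API): each addTransceiver() call creates a transceiver,
// and hence an m-line in the next offer, whether or not a track is attached.
function prepareReceiveOnly(pc) {
  // Reserve an audio and a video m-line without sending anything yet.
  const audio = pc.addTransceiver('audio', { direction: 'recvonly' });
  const video = pc.addTransceiver('video', { direction: 'recvonly' });
  return [audio, video];
}
```

This is the "raw access" level of control: you decide how many m-lines exist and in which direction they run, instead of letting addTrack() manage that for you.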
We do that. Then we create an offer, which will have two m-lines, one for audio and one for video, and call setLocalDescription() on the first peer connection and setRemoteDescription() on the second. Then we call createAnswer(), setLocalDescription() on the second, setRemoteDescription() on the first, and then we say we’re done with negotiation.
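The offer/answer exchange just described can be sketched as a small helper. negotiate is a hypothetical name; the real fiddle wires this signaling between the two local connections:

```javascript
// Sketch of the offer/answer exchange between two local peer connections,
// in the order the steps happen in the fiddle.
async function negotiate(pc1, pc2) {
  const offer = await pc1.createOffer();   // two m-lines: audio and video
  await pc1.setLocalDescription(offer);
  await pc2.setRemoteDescription(offer);
  const answer = await pc2.createAnswer();
  await pc2.setLocalDescription(answer);
  await pc1.setRemoteDescription(answer);
}
```

In a real application the offer and answer would travel over your signaling channel instead of being passed directly between the two objects.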
If we look at the events fired for onnegotiationneeded, we see PC2, that is from here, and PC1, that is from here. The order is a bit confusing in Firefox.
Then, down here, we’re done with negotiation. But then we get onnegotiationneeded again. Why is that?
Tsahi: Why is that?
Philipp: Because we have created a transceiver, and that transceiver is not matched to the incoming transceivers. So we have three transceivers: our local video transceiver, which we created, plus the incoming audio and video transceivers. And if we attached a video track to that transceiver’s sender on the second peer connection, it would not be sent back without another renegotiation.
That is quite unexpected, right?
Tsahi: OK, I can’t say that I understand transceivers too well, but OK.
Philipp: It’s complicated. The transceiver API gives you a lot more control over the SDP. AddTrack tries to be a bit user friendly in how it puts tracks into the SDP, while AddTransceiver just gives you raw access. So which is better depends on your application.
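One way to avoid the extra renegotiation shown above is to reuse the transceiver that was created for the incoming m-line, instead of calling addTrack() or creating a new transceiver. This is a sketch under that assumption; sendOnExistingTransceiver is a hypothetical helper name:

```javascript
// Sketch: attach an outgoing track to the transceiver that is already
// associated with the matching incoming m-line, rather than creating a
// new one. replaceTrack() on the sender does not trigger renegotiation,
// and flipping the direction to sendrecv reuses the existing m-line.
async function sendOnExistingTransceiver(pc, track) {
  const transceiver = pc.getTransceivers()
    .find(t => t.receiver.track && t.receiver.track.kind === track.kind);
  if (!transceiver) {
    throw new Error('no matching transceiver for kind ' + track.kind);
  }
  await transceiver.sender.replaceTrack(track);
  transceiver.direction = 'sendrecv';
  return transceiver;
}
```

This is the kind of fine-grained control the transceiver API is meant for; with addTrack() the browser does this matching for you.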
Tsahi: We talked about it earlier, and you couldn’t quite give me a scenario that came to mind immediately where you’d say: no, AddTransceiver is what we need to use.
Philipp: Yes. I mean, if you are building your application fully on AddTransceiver, go for it.
But if your application evolved from AddStream and AddTrack it is very hard to make use of AddTransceiver, I would say.
Tsahi: OK. And your suggestion for someone starting out is: don’t use AddStream because it’s old; it will work, but it might break in the future. Use AddTrack, because it’s simpler than AddTransceiver and generally does everything you need anyway.
Tsahi: OK, so thank you. And we’ll see you all in our jsFiddle next month.
Philipp: Bye bye.