🎻 July 2022 fiddle
Look us up here for the WebRTC Fiddle of the Month, created… once a month.
Or just enroll in one of our excellent WebRTC training courses.
Tsahi: Hi, and welcome to our WebRTC Fiddle of the Month. This time, what we’re going to do is share only a part of the screen. Right, Philip?
Tsahi: Okay, and how exactly do we go about doing that on a conceptual level?
Philip: On a conceptual level, we’re trying to crop the resulting stream, that means we need to crop each frame.
Tsahi: Okay, so I’m going to use getDisplayMedia() as I usually would; that would give me whatever the user wanted to share. And then how do I get that data and have it in a way that I can manipulate it?
Philip: That is done with the new Breakout Box API, meaning MediaStreamTrackGenerator and MediaStreamTrackProcessor, which is a Chrome-only API right now.
Tsahi: Okay, do you want to guide us through this, then?
Philip: Yes. So we start at the bottom. And the first thing we do is check if the API is available. If it isn’t, then it doesn’t make sense to continue with the fiddle, and in production you couldn’t offer such a feature to those users.
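The availability check Philip describes can be sketched like this (the helper name is an assumption, not the fiddle’s actual code):

```javascript
// Feature-detect the Breakout Box API, which is Chrome-only at the time
// of writing. Using typeof on the globals avoids ReferenceErrors in
// browsers (or runtimes) that don't have them.
function supportsBreakoutBox() {
  return typeof MediaStreamTrackGenerator === 'function' &&
         typeof MediaStreamTrackProcessor === 'function';
}

if (!supportsBreakoutBox()) {
  console.log('Breakout Box API not available; skipping the fiddle.');
}
```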
And the next thing we have is a drop-down, which we’ll enable if this works. This drop-down has an onchange handler, so if we change the drop-down, it will change the value of the cropmode variable, and we check that value later in the code.
Then we have our run() function, which is an async function; we can’t use top-level await yet, so we need to put the code into an async function. This function is fairly simple. The first thing it does is call getDisplayMedia() to get a video stream, which shows the usual screen picker in Chrome. We get the video track from that stream, and then we create a MediaStreamTrackGenerator from that track.
Tsahi: Okay, so this is where the breakout box comes in. Right?
Tsahi: You call getDisplayMedia, the user decided what to share in your case, I see that you decided to share a Chrome tab, probably.
Philip: Chrome window.
Tsahi: Chrome window. And then we call MediaStreamTrackGenerator, which is essentially the breakout box.
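The flow up to this point might look roughly like the sketch below. The dependencies are injected as parameters here only so the shape is visible outside a browser; in the fiddle itself they would be navigator.mediaDevices and the global MediaStreamTrackGenerator:

```javascript
// Grab a screen-capture track and create the generator that will carry
// the processed (cropped) frames.
async function run(mediaDevices, TrackGenerator) {
  // getDisplayMedia shows Chrome's screen/window/tab picker.
  const stream = await mediaDevices.getDisplayMedia({ video: true });
  const [track] = stream.getVideoTracks();
  // The generator behaves like a normal video track; a MediaStream built
  // from it can go into a <video> element or a peer connection.
  const generator = new TrackGenerator({ kind: 'video' });
  return { track, generator };
}

// In the browser: run(navigator.mediaDevices, MediaStreamTrackGenerator);
```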
Philip: And we’re currently doing a local example so we don’t attach it to a peer connection.
Tsahi: Okay, but I can use the media stream that it generates and shove it into a peer connection to someone else if I want.
Philip: That makes it a bit more complicated than the simple fiddle we have, but we try to keep them small.
Philip: Yes, and we set the video element’s srcObject to a media stream based on the generator.
Philip: And the next thing we have is a MediaStreamTrackProcessor, which gets the track as an input argument. So it is processing the output from the track.
Tsahi: So this is a transform function.
Philip: Yes. The processor has a readable stream, and that goes through a transform stream, which calls a transform function on every single frame.
Philip: Then it goes into the generator’s writable stream. So it’s creating a media pipeline.
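That pipeline is standard WHATWG streams plumbing: the processor’s readable end pipes through a TransformStream into the generator’s writable end. A minimal sketch, with a helper name of our own choosing:

```javascript
// Wire readable -> transform -> writable. In the fiddle this is
// processor.readable piped into generator.writable, with VideoFrames
// as the chunks flowing through.
function buildPipeline(readable, writable, transform) {
  return readable
    .pipeThrough(new TransformStream({ transform }))
    .pipeTo(writable);
}
```

The returned promise resolves once the pipeline drains, e.g. when the source track ends.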
Tsahi: Okay. So, let’s say you go to the drop down on the right and just let’s choose something.
Philip: Yes, let’s do the upper left quarter. So we see the size changed, and the visible viewport changed as well.
Philip: So let’s look at how we do that. We have this transform function which gets the frame and the controller; the controller is where you reinsert the frame after you’re done processing it. Then we evaluate the cropmode. And if the cropmode is upper left, that means we are creating a new frame with a reduced visible rectangle, which is half the width and height of the existing frame.
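A sketch of that transform function, written as a factory so the cropping logic is visible on its own. In the fiddle, FrameCtor would be the browser’s VideoFrame and getCropmode would read the drop-down’s variable; both names here are assumptions:

```javascript
// Build a transform(frame, controller) function. For the upper-left mode
// it re-wraps the frame with a visibleRect covering the top-left quarter.
function makeTransform(FrameCtor, getCropmode) {
  return (frame, controller) => {
    if (getCropmode() === 'upper-left') {
      controller.enqueue(new FrameCtor(frame, {
        visibleRect: {
          x: 0,
          y: 0,
          width: frame.codedWidth / 2,
          height: frame.codedHeight / 2,
        },
      }));
    } else {
      // Default: no cropping, just a new frame based on the old one.
      controller.enqueue(new FrameCtor(frame));
    }
    frame.close(); // release the original once the copy is queued
  };
}
```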
Philip: And for the similar case of upper right, we would take the same width and height divided by two, and we would choose new x and y coordinates for the offset. There’s a slight problem with that, and it doesn’t work in the example I have here, because the offset needs to be aligned with the pixel format. So you can’t choose it freely; you need to use multiples of, I think, 60 for that.
Tsahi: Okay. By the way, it worked for me when I played with it on the full screen.
Tsahi: So when I shared the full screen and played with it, I saw the fiddle on the page. And I could see either the top left corner of it or the top right one, which is really neat.
Philip: Yes, it really depends on the size, and the window size is an odd number to start with. If you could show the screen once more.
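One way around the alignment issue is to round the offsets down to the required multiple before building the visibleRect. The exact multiple depends on the pixel format, so treat the values below as placeholders; the helper is illustrative, not from the fiddle:

```javascript
// Round a coordinate down to a multiple that the pixel format accepts,
// so visibleRect offsets like x = width / 2 stay valid for odd sizes.
function alignDown(value, alignment) {
  return value - (value % alignment);
}

// e.g. for the upper-right quarter of a frame with odd dimensions:
// visibleRect: { x: alignDown(frame.codedWidth / 2, ALIGNMENT), y: 0, ... }
```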
Philip: And we also have the default case, which is to do nothing, no cropping. So we just create a new video frame based on the old frame.
Philip: And then we enqueue the frame into the pipeline and close the old frame.
Tsahi: Just a question out of curiosity; couldn’t we just enqueue the old frame in instead of creating a new one from the old one?
Philip: I tried that. But that got more complicated with closing the frame. So it was easier to just create a new frame based on the existing frame.
Tsahi: Okay. Okay.
Philip: And in terms of memory alignment, it’s the same. It’s the same memory back end.
Tsahi: Okay, so it’s just a pointer to an object?
Tsahi: Okay. So let’s sum things up:
Philip: Yes. For example, for WebRTC to do a smart gallery feature.
Tsahi: Yep. So first we did the getDisplayMedia or getUserMedia. From there…
Tsahi: Okay. So thank you for that. Until next month, in our next fiddle of the month.
Philip: Yes. Bye.