
A little introduction to my project: I have a virtual camera that I built with DirectShow, and it works the way it is supposed to. I then installed an external library, uvgRTP, and wrote code so that I receive the frames in a separate thread; that works fine. I made a queue that those frames are pushed onto, and FillBuffer then processes them. I do get an image; in fact I get an actually good video for about 6 seconds, and then I get extreme lag. The RTP frames I'm receiving are not the problem, because those are tested. I'm not sure what the issue is; I'm getting a sense that FillBuffer can't quite handle it after 6 seconds, maybe it isn't fast enough, but that shouldn't be a problem. Does anybody have experience with DirectShow virtual camera filters who can help me out?

Here is my code for receiving the RTP frames and pushing them onto a queue:

void CVCamStream::rtpThreadProc(CVCamStream* obj) {
    constexpr uint16_t LOCAL_PORT = 12480;
    constexpr char LOCAL_ADDRESS[] = "::";
    uvgrtp::context ctx;
    uvgrtp::session* sess = ctx.create_session(LOCAL_ADDRESS);
    int flags = RCE_RECEIVE_ONLY;
    uvgrtp::media_stream* receiver = sess->create_stream(LOCAL_PORT, RTP_FORMAT_H264, flags);

    if (receiver) {
        while (!obj->m_quit_thread) {
            // Blocks until a frame arrives
            uvgrtp::frame::rtp_frame* frame = receiver->pull_frame();
            if (frame) {
                // Copy the payload into a vector and hand it to the queue
                std::vector<uint8_t> newFrame(frame->payload, frame->payload + frame->payload_len);
                obj->frameBuffer_.Push(newFrame);
                uvgrtp::frame::dealloc_frame(frame);
            }
        }
    }
    if (sess) {
        ctx.destroy_session(sess);
    }
}

Let me explain it a little. This is a thread that runs separately from the rest of my project; I took the code from the uvgRTP library examples. It creates an IPv6 session, and then we are able to pull the frames that are being sent, using receiver->pull_frame(). If we get a frame, the payload is copied into a vector that I use as an image to display, and Push() puts that vector into my FrameBuffer class.
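One thing worth considering at this point: Push() never blocks and never drops, so if FillBuffer falls behind even slightly, the queue grows and every frame is delivered later and later, which on screen looks exactly like lag that starts a few seconds in. A minimal sketch of a bounded push with a drop-oldest policy (BoundedFrameBuffer and kMaxQueued are made-up names for illustration, not from the project):

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

// Sketch: if the queue is already full, the oldest frame is dropped so the
// consumer always sees recent data instead of an ever-growing backlog.
// kMaxQueued is a hypothetical tuning knob.
class BoundedFrameBuffer {
public:
    static constexpr size_t kMaxQueued = 4;

    void Push(std::vector<uint8_t> frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        while (queue_.size() >= kMaxQueued) {
            queue_.pop();            // drop the oldest queued frame
            ++dropped_;
        }
        queue_.push(std::move(frame));
        condVar_.notify_one();
    }

    size_t Dropped() const { return dropped_; }

private:
    std::queue<std::vector<uint8_t>> queue_;
    std::mutex mutex_;
    std::condition_variable condVar_;
    size_t dropped_ = 0;
};
```

Note that blindly dropping H.264 frames corrupts the decode until the next keyframe, so in practice you would only drop back to the most recent IDR; the sketch only illustrates bounding the latency.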

class FrameBuffer {
public:
    // Push a new frame onto the queue
    void Push(const std::vector<uint8_t>& frame) {
        // Lock the queue with a mutex to prevent concurrent access issues
        std::unique_lock<std::mutex> lock(mutex_);
        // Add the new frame to the end of the queue
        queue_.push(frame);
        condVar_.notify_one(); // Notify one waiting thread

        // Debugging: log the addition of a new frame and the current queue size
        std::wstringstream wss;
        wss << L"Pushed a frame. Queue size: " << queue_.size() << L"\n";
        OutputDebugString(wss.str().c_str());
    }

    // Pop a frame from the queue (blocks until one is available)
    bool Pop(std::vector<uint8_t>& frame) {
        std::unique_lock<std::mutex> lock(mutex_);
        while (queue_.empty()) {
            condVar_.wait(lock); // Wait until there's a frame
            // Optional: return false or break if shutting down
        }

        frame = std::move(queue_.front());
        queue_.pop();

        return true;
    }

private:
    std::queue<std::vector<uint8_t>> queue_;
    std::mutex mutex_;
    std::condition_variable condVar_;
};

This is the FrameBuffer class. If we pay a little attention, it should pop the first frame and keep the others in the queue. I have debugged it and I am receiving frames; in the debug output it looks like this: "Pushed a frame. Queue size: 10", "Pushed a frame. Queue size: 5", "Pushed a frame. Queue size: 9", "Pushed a frame. Queue size: 12". So the queue size isn't constant. Ideally it would stay at 1, but FillBuffer doesn't process frames as fast as receiver->pull_frame() delivers them. Then we have the FillBuffer code:

HRESULT CVCamStream::FillBuffer(IMediaSample* pms) {
    std::vector<uint8_t> frameData;
    frameBuffer_.Pop(frameData); // blocks until a frame is available

    long bufferSize = pms->GetSize();
    if (bufferSize < static_cast<long>(frameData.size())) {
        return E_FAIL; // frame doesn't fit into the sample buffer
    }

    BYTE* pData = nullptr;
    pms->GetPointer(&pData);
    memcpy(pData, frameData.data(), frameData.size());
    pms->SetActualDataLength(static_cast<long>(frameData.size()));
    return NOERROR;
}

Here we have FillBuffer. It now runs on its own thread, so we have two threads trying to work together: one receiving frames over RTP, and one processing those frames in FillBuffer. Looking at the queue size, it gets lower and bigger but never exceeds about 20 frames, because that's as much as FillBuffer can handle. So FillBuffer receives the frames from the queue and processes them.
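One DirectShow-specific thing the FillBuffer above does not do is timestamp the samples. In a push source, the downstream renderer paces presentation from each sample's start/stop times, and samples without timestamps (or a media type without AvgTimePerFrame) can play fine briefly and then stutter once buffers fill. A sketch of the timestamp arithmetic, assuming a fixed frame rate (FrameTimes and frameNumber are hypothetical names; in FillBuffer you would pass the result to pms->SetTime(&start, &stop)):

```cpp
#include <cstdint>

// REFERENCE_TIME is 100-ns units in DirectShow; typedef'd here so the
// sketch compiles outside of a DirectShow build.
typedef int64_t REFERENCE_TIME;
const REFERENCE_TIME UNITS = 10000000;   // 1 second in 100-ns units

// Hypothetical helper: compute the [start, stop) presentation interval
// of frame number n at a fixed frame rate.
void FrameTimes(int64_t frameNumber, int fps,
                REFERENCE_TIME& start, REFERENCE_TIME& stop) {
    REFERENCE_TIME frameLength = UNITS / fps;
    start = frameNumber * frameLength;
    stop  = start + frameLength;
}
```

Incrementing frameNumber per delivered sample and calling SetTime (plus SetSyncPoint on keyframes) gives the renderer a schedule to pace against instead of rendering as fast as samples arrive.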

To add to this, here is the media type I use to get my H.264 decoder; the H.264 frames arrive without start codes.

HRESULT CVCamStream::GetMediaType(CMediaType* pMediaType)
{
    pMediaType->InitMediaType();
    pMediaType->SetType(&MEDIATYPE_Video);
    pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1);
    pMediaType->SetFormatType(&FORMAT_MPEG2_VIDEO);

    pMediaType->bFixedSizeSamples = FALSE;
    pMediaType->bTemporalCompression = TRUE;

    // Our H.264 codec only supports baseline; we can't extract the
    // width and height from other H.264 sources, in which case the
    // value would still be 0. Hence we just set a fixed value.
    unsigned uiWidth = 1080;
    unsigned uiHeight = 2280;

    MPEG2VIDEOINFO* mpeg2 = (MPEG2VIDEOINFO*)pMediaType->AllocFormatBuffer(sizeof(MPEG2VIDEOINFO) + 80);
    ZeroMemory(mpeg2, sizeof(MPEG2VIDEOINFO) + 80);

    mpeg2->dwProfile = 66; // baseline profile
    mpeg2->dwLevel = 51;
    mpeg2->dwFlags = 4;    // NAL units are prefixed with 4-byte lengths

    byte parameters[] = {
        // SPS length and data
        0, 14,
        103, 66, 128, 10, 218, 1, 12, 4, 118, 128, 109, 10, 19, 80,
        // PPS length and data (the length is 8 because of the new PPS size)
        0, 8,
        104, 206, 6, 242, 1, 2, 3, 4
    };

    mpeg2->cbSequenceHeader = sizeof(parameters);
    memcpy(mpeg2->dwSequenceHeader, parameters, sizeof(parameters));

    VIDEOINFOHEADER2* pvi2 = &mpeg2->hdr;
    pvi2->bmiHeader.biBitCount = 12;
    pvi2->bmiHeader.biPlanes = 1; // BITMAPINFOHEADER requires biPlanes == 1
    pvi2->bmiHeader.biWidth = uiWidth;
    pvi2->bmiHeader.biHeight = uiHeight;
    pvi2->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    pvi2->bmiHeader.biSizeImage = DIBSIZE(pvi2->bmiHeader);
    pvi2->bmiHeader.biCompression = MAKEFOURCC('a', 'v', 'c', '1'); // match MEDIASUBTYPE_AVC1
    //pvi2->AvgTimePerFrame = 10000000 / 30; // not currently set
    SetRect(&pvi2->rcSource, 0, 0, uiWidth, uiHeight);
    pvi2->rcTarget = pvi2->rcSource;
    pvi2->dwPictAspectRatioX = uiWidth;
    pvi2->dwPictAspectRatioY = uiHeight;
    //pvi2->dwBitRate = 1500000;

    return NOERROR;

} // GetMediaType
  • I don't think you're accurately describing the problem you're experiencing; you just keep stating that you're not doing anything wrong and go straight to a couple of code blocks with no explanation whatsoever
    – vipcxj
    Commented Apr 15 at 8:29
  • I'm sorry, I wasn't clear. Is it better now? It's my first time posting a question. Thank you for the understanding. Commented Apr 15 at 8:54
  • I think you'd better explain exactly what you mean by "i get extrem lagg". In order to solve the problem, the most important thing is to understand what the problem is and what the symptoms are in order to treat it.
    – vipcxj
    Commented Apr 15 at 10:55
  • Alright sir, so when I start receiving my frames everything is good until I reach about 3 to 6 seconds of video; after that, the frames being processed are not acting how they are supposed to. It's like frame drops: some frames don't get processed, so the image is all crappy, like trying to run a high-end game on a low-spec PC. Commented Apr 15 at 11:34
  • I have not used uvgRTP. However, RTP is a UDP-based protocol, so the RTP packets you receive may be unordered and there may be packet loss. Of course packet loss is rare when you are on a local LAN, but unordered packets are still very common. Some RTP libraries will adjust the order for you, some won't; I'm not sure which kind uvgRTP is. You can print the value of the sequence number in the received RTP packets to see whether they arrive in order or not
    – vipcxj
    Commented Apr 15 at 12:07
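Following up on the last comment, the sequence numbers can be checked without any uvgRTP-specific code; the value to feed in would be frame->header.seq from each pulled frame. A small sketch of a wraparound-safe gap counter (SeqChecker is a hypothetical name):

```cpp
#include <cstdint>

// Sketch of a sequence-gap check: RTP sequence numbers are 16-bit and
// wrap around, so differences are computed modulo 65536. A reordered
// (late) packet shows up here as a very large gap.
class SeqChecker {
public:
    // Returns the number of packets missing between the previous
    // sequence number and this one (0 means in order).
    int Check(uint16_t seq) {
        if (!havePrev_) { havePrev_ = true; prev_ = seq; return 0; }
        uint16_t expected = static_cast<uint16_t>(prev_ + 1);
        int gap = static_cast<uint16_t>(seq - expected);
        prev_ = seq;
        return gap;
    }

private:
    bool havePrev_ = false;
    uint16_t prev_ = 0;
};
```

Logging a nonzero result with OutputDebugString in the receive loop would show whether the "lag" coincides with loss or reordering.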
