Contents
  1. V4L2 (Video for Linux two) Workflow
  2. Porting the Code to Qt
Video for Linux two

The Linux kernel provides video device support through the V4L2 driver framework, an extended and upgraded successor to the original V4L API. Video capture works like a pipeline: following the V4L2 capture workflow, buffers are allocated in kernel space, and each captured frame is written into those buffers while the application's main thread pulls the frames out.

V4L2 (Video for Linux two) Workflow

While porting Qt + OpenCV to the ARM board, I found that the library functions provided by Qt or OpenCV could not be used directly for video capture. Searching online suggested that the GTK libraries would also have to be ported to the board, which did not look practical (reports describe it as very complicated and not worth the effort). Instead we can use V4L2, which defines a standard interface for video devices under Linux and has shipped with the kernel since Linux 2.5 (check whether /usr/include/linux/ contains videodev2.h).

The English API manual can be downloaded from the official V4L2 documentation site.

A Chinese translation is available on CSDN.

The commonly used structures and macros we need are summarized below; all of them can be found in

/usr/include/linux/videodev2.h

  • Commonly used V4L2 structures (a short usage sketch follows the list):
struct v4l2_requestbuffers   // request frame buffers, used with VIDIOC_REQBUFS
struct v4l2_capability       // device capabilities, used with VIDIOC_QUERYCAP
struct v4l2_input            // video input information, used with VIDIOC_ENUMINPUT
struct v4l2_standard         // video standard (e.g. PAL/NTSC), used with VIDIOC_ENUMSTD
struct v4l2_format           // frame format, used with VIDIOC_G_FMT / VIDIOC_S_FMT
struct v4l2_buffer           // one frame buffer in the driver, used with VIDIOC_QUERYBUF
struct v4l2_crop             // cropping rectangle of the video signal
v4l2_std_id                  // video standard identifier
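
As a quick illustration of how these structures are used, here is a minimal sketch that opens a camera and prints what VIDIOC_QUERYCAP reports; the device path /dev/video0 is an assumption, not something fixed by the post.

    // Minimal sketch: query a V4L2 device's capabilities.
    #include <cstdio>
    #include <cstring>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main()
    {
        int fd = open("/dev/video0", O_RDWR);          // open the device node
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == -1) {  // ask the driver what it can do
            perror("VIDIOC_QUERYCAP");
            close(fd);
            return 1;
        }

        printf("driver: %s, card: %s\n", (const char*)cap.driver, (const char*)cap.card);
        if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)
            printf("device supports video capture\n");

        close(fd);
        return 0;
    }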
  • Commonly used V4L2 ioctl command macros (a format-enumeration sketch follows the list):

    VIDIOC_REQBUFS   // request frame buffers from the driver
    VIDIOC_QUERYBUF  // query the offset/length of a buffer allocated by VIDIOC_REQBUFS so it can be mmap'ed
    VIDIOC_QUERYCAP  // query the driver's capabilities
    VIDIOC_ENUM_FMT  // enumerate the image formats supported by the driver
    VIDIOC_S_FMT     // set the current capture format
    VIDIOC_G_FMT     // read the current capture format
    VIDIOC_TRY_FMT   // test whether a format is supported without changing driver state
    VIDIOC_CROPCAP   // query the driver's cropping capabilities
    VIDIOC_S_CROP    // set the cropping rectangle of the video signal
    VIDIOC_G_CROP    // read the cropping rectangle of the video signal
    VIDIOC_QBUF      // queue an (empty) buffer back to the driver
    VIDIOC_DQBUF     // dequeue a filled buffer from the driver (this is how captured data is retrieved)
    VIDIOC_STREAMON  // start capturing/streaming
    VIDIOC_STREAMOFF // stop capturing/streaming
    VIDIOC_QUERYSTD  // query the video standards the device supports, e.g. PAL or NTSC
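
The Qt code below never calls VIDIOC_ENUM_FMT, but it is useful for checking what the camera actually supports before hard-coding a pixel format. A small sketch, assuming fd is an already-opened capture device:

    // Sketch: enumerate the pixel formats offered by a capture device (fd assumed open).
    #include <cstdio>
    #include <cstring>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    void list_formats(int fd)
    {
        struct v4l2_fmtdesc desc;
        memset(&desc, 0, sizeof(desc));
        desc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

        // Keep increasing the index until the driver returns an error (EINVAL).
        for (desc.index = 0; ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0; ++desc.index) {
            printf("format %u: %s (fourcc 0x%08x)\n",
                   desc.index, (const char*)desc.description, desc.pixelformat);
        }
    }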
  • After reading many blog posts analysing the V4L2 workflow, I summarized it as follows: get the flow clear first, then modify and port the code. The flow below is for video capture; a condensed code sketch follows the list.

    • Open the device file /dev/video* (under Linux everything is a file). open() can open the device in blocking or non-blocking mode.

    • After opening the device, check what it can actually do: video input, capture, audio input/output, and so on.

    • Crop the video frame. Two structures are involved:

      struct v4l2_crop        // cropping rectangle of the video signal
      struct v4l2_cropcap     // describes the croppable area

      v4l2_cropcap tells us the bounds of the region that may be cropped;
      the crop itself is performed by filling in v4l2_crop and passing it to the driver.

    • Set the frame format, including the width, height, and pixel format.

    • Next, allocate the frame buffers. Because user space and kernel space are separate in the driver model, the kernel-side buffers can be accessed through memory mapping, through dedicated calls such as read(), or via user pointers; here we use mmap() to map the kernel buffers into user space.

    • Once initialization is complete, start the data stream. The buffer count requested in the previous step is kept small (no more than 5 frames here). With streaming on, the device keeps capturing frames into the queue while the user side dequeues one frame at a time.

    • Finally, convert the retrieved data into the desired format. For example, the raw RGB/YUV frames can be compressed to MJPEG. MJPEG (Motion JPEG) is a motion-still-image compression technique: it compresses each frame independently as a still image and strings the frames into a motion sequence. As a digital compression format it only removes spatial redundancy within each frame and does not touch temporal redundancy between frames, so its compression efficiency is low.
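
      To recap the list above, here is a minimal sketch of the ioctl call order for mmap-based streaming capture; the device path, resolution, and YUYV format are placeholders, only one buffer is actually mapped, and error handling is left out (the complete version is the Qt code in the next section):

      // Condensed capture flow (mmap streaming I/O); error checks omitted for brevity.
      #include <cstring>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <sys/mman.h>
      #include <linux/videodev2.h>

      void capture_once()
      {
          int fd = open("/dev/video0", O_RDWR);                  // 1. open the device

          v4l2_capability cap{};                                 // 2. query capabilities
          ioctl(fd, VIDIOC_QUERYCAP, &cap);

          v4l2_format fmt{};                                     // 3. set the frame format
          fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          fmt.fmt.pix.width = 640;
          fmt.fmt.pix.height = 480;
          fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
          ioctl(fd, VIDIOC_S_FMT, &fmt);

          v4l2_requestbuffers req{};                             // 4. request kernel buffers
          req.count = 4;
          req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          req.memory = V4L2_MEMORY_MMAP;
          ioctl(fd, VIDIOC_REQBUFS, &req);

          v4l2_buffer buf{};                                     // 5. map and queue buffer 0
          buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;                //    (real code maps every buffer)
          buf.memory = V4L2_MEMORY_MMAP;
          buf.index = 0;
          ioctl(fd, VIDIOC_QUERYBUF, &buf);
          void *mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, buf.m.offset);
          ioctl(fd, VIDIOC_QBUF, &buf);

          int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;                // 6. start streaming
          ioctl(fd, VIDIOC_STREAMON, &type);

          ioctl(fd, VIDIOC_DQBUF, &buf);                         // 7. dequeue a filled frame,
          // ... process buf.bytesused bytes at mem ...          //    process it, then re-queue it
          ioctl(fd, VIDIOC_QBUF, &buf);

          ioctl(fd, VIDIOC_STREAMOFF, &type);                    // 8. stop and clean up
          munmap(mem, buf.length);
          close(fd);
      }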

  • The complete capture workflow is shown in the flow chart:

Porting the Code to Qt

  • V4L2_QT.cpp

    #include "v4l2_qt.h"
    #include "ui_v4l2_qt.h"
    #include "v4l2_heade.h"
    V4L2_QT::V4L2_QT(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::V4L2_QT)
    {
    ui->setupUi(this);
    Camera_fd = -1;
    buffers = NULL;
    n_buffers = 0;
    device = "/dev/video0";   // primary camera node under Linux
    device_2 = "/dev/video4"; // fallback camera node
    pixel_format = V4L2_PIX_FMT_YUYV; // capture format: YUYV (YUV 4:2:2), not MJPEG
    pre_w = WIDTH; // preview window width
    pre_h = HEIGHT; // preview window height
    timer=new QTimer(this);
    frame=new QImage(rgb,WIDTH,HEIGHT,QImage::Format_RGB888);
    this->setMaximumSize(640,480);
    this->setMinimumSize(640,480);
    ui->label->setMaximumSize(640,480);
    ui->label->setMinimumSize(640,480);

    readCamera(); // initialize the camera and start the video stream
    connect(timer,SIGNAL(timeout()),this,SLOT(post_preview())); // push each frame to the preview label on every timer tick
    timer->start(100);
    }

    V4L2_QT::~V4L2_QT()
    {
    delete ui;
    releaseCamera();
    }
    int V4L2_QT::xioctl(int fd, int request, void *arg)
    {
    int r;

    do
    r = ioctl(fd, request, arg);
    while (-1 == r && EINTR == errno);

    return r;
    }
    void V4L2_QT::errno_exit(const char *s)
    {
    fprintf(stderr, "%s error %d, %s\n", s, errno, strerror(errno));
    exit(EXIT_FAILURE);
    }
    int V4L2_QT::opendevice(char*device)

    {
    Camera_fd=open(device,O_RDWR);
    if(Camera_fd<0)
    {
    fprintf(stderr,"Cannot open %s, %d, %s\n",device,errno,strerror(errno));
    return errno;
    }
    else
    {
    PRINTK("Open video success\n");
    return 1;
    }
    }

    // Request and memory-map the frame buffers

    void V4L2_QT::init_mmap()

    {
    struct v4l2_requestbuffers reqbufs; // buffer request sent to the driver, including how many buffers we want
    CLEAR(reqbufs); // zero the structure
    reqbufs.count=4; // number of buffers, i.e. keep 4 frames in the buffer queue
    reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; // stream type
    reqbufs.memory = V4L2_MEMORY_MMAP;
    if(-1==xioctl(Camera_fd,VIDIOC_REQBUFS,&reqbufs))
    {
    if (EINVAL == errno){
    fprintf(stderr, "%s does not support memory mapping\n", device);
    exit(EXIT_FAILURE);
    }
    else{
    errno_exit("VIDIOC_REQBUFS");

    }

    }
    if (reqbufs.count < 2)
    {
    fprintf(stderr, "Insufficient buffer memory on %s\n", device);
    exit(EXIT_FAILURE);
    }
    buffers = (struct buffer *)calloc(reqbufs.count, sizeof(*buffers));
    if (!buffers)
    {
    fprintf(stderr, "Out of memory\n");
    exit(EXIT_FAILURE);
    }
    for (n_buffers = 0; n_buffers < reqbufs.count; ++n_buffers)
    {
    struct v4l2_buffer buf;

    CLEAR(buf);

    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = n_buffers;

    if (-1 == xioctl(Camera_fd, VIDIOC_QUERYBUF, &buf))
    errno_exit("VIDIOC_QUERYBUF");

    buffers[n_buffers].length=buf.length;
    buffers[n_buffers].start = (unsigned char *)mmap(NULL,buf.length,PROT_READ | PROT_WRITE ,MAP_SHARED ,Camera_fd, buf.m.offset);
    if (MAP_FAILED == buffers[n_buffers].start)
    errno_exit("mmap");
    }
    PRINTK("init_mmap Success\n");
    }

    int V4L2_QT::init_device(unsigned int w, unsigned int h)

    {
    struct v4l2_capability cap; // device capabilities, e.g. whether this is a video capture device
    struct v4l2_format fmt; // frame format: width, height, pixel format, etc.
    struct v4l2_cropcap cropcap; // cropping capabilities of the input
    struct v4l2_crop crop; // cropping rectangle to apply
    if(-1==xioctl(Camera_fd,VIDIOC_QUERYCAP,&cap)) // check that this is a V4L2 video device
    {
    if(EINVAL==errno)
    {
    fprintf(stderr,"%s is not a V4L2 device\n",device);
    return(EXIT_FAILURE);
    }
    else
    {
    errno_exit("VIDIOC_QUERYCAP");
    }
    }
    if(!(cap.capabilities&V4L2_CAP_VIDEO_CAPTURE))
    {
    fprintf(stderr, "no video capture device\n");
    exit(EXIT_FAILURE);
    }
    cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if(-1==xioctl(Camera_fd,VIDIOC_CROPCAP,&cropcap))
    errno_exit("VIDIOC_CROPCAP");
    crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    crop.c = cropcap.defrect;
    if (-1 == xioctl(Camera_fd, VIDIOC_S_CROP, &crop))
    {
    switch (errno)
    {
    case EINVAL:
    break;
    default:
    break;
    }
    }
    CLEAR(fmt); // clear the format structure
    // set the frame format
    fmt.type=V4L2_BUF_TYPE_VIDEO_CAPTURE; // stream type, must always be V4L2_BUF_TYPE_VIDEO_CAPTURE for capture
    fmt.fmt.pix.width=w;
    fmt.fmt.pix.height=h;
    fmt.fmt.pix.pixelformat = pixel_format;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;

    if (-1 == xioctl(Camera_fd, VIDIOC_S_FMT, &fmt)) // write the format to the driver
    errno_exit("VIDIOC_S_FMT");
    if ((fmt.fmt.pix.width != w) || (fmt.fmt.pix.height != h)) // check whether the driver accepted the requested size
    {
    qWarning(" Frame size: %ux%u (requested size %ux%u is not supported by device)\n",
    fmt.fmt.pix.width, fmt.fmt.pix.height, w, h);
    w = fmt.fmt.pix.width;
    h = fmt.fmt.pix.height;
    }
    else {
    qWarning(" Frame size: %dx%d\n", w, h);
    }
    PRINTK("init_device Success\n");
    // configuration done; now allocate and map the buffers
    init_mmap();
    return 0;
    }

    void V4L2_QT::readCamera()

    {
    if(opendevice(device)){
    init_device(pre_w,pre_h);
    stream_on();
    }
    else{
    opendevice(device_2);
    init_device(pre_w,pre_h);
    stream_on();
    }
    PRINTK("readCamera Success\n");
    }
    void V4L2_QT::process_image(unsigned char *buf, int size)
    {
    qDebug()<<size<<endl;
    PRINTK("process_image get in");
    //showPicData(buf, size);
    convertMJPEG2Mat(buf);
    PRINTK("yuv-->rgb Success out \n");

    //convertMJPEG2Mat(buf);
    }
    void V4L2_QT::stream_on()
    {
    unsigned int i;
    enum v4l2_buf_type type;
    struct v4l2_buffer buf; // represents one frame buffer in the driver
    for(i=0;i<n_buffers;++i)
    {
    CLEAR(buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index=i;
    if (-1 == xioctl(Camera_fd, VIDIOC_QBUF, &buf))
    errno_exit("VIDIOC_QBUF");
    }
    type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    if (-1 == xioctl(Camera_fd, VIDIOC_STREAMON, &type))
    errno_exit("VIDIOC_STREAMON");
    PRINTK("stream_on Success\n");
    }

    void V4L2_QT::post_preview()

    {
    struct v4l2_buffer buf;
    CLEAR(buf);
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    if (-1 == xioctl(Camera_fd, VIDIOC_DQBUF, &buf))
    errno_exit("VIDIOC_DQBUF");
    assert(buf.index < n_buffers);
    PRINTK("post_preview--->process_images\n");
    process_image(buffers[buf.index].start, buf.bytesused);
    PRINTK("process_image Success\n");

    v4l2_buffer queue_buf;
    CLEAR(queue_buf);
    queue_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    queue_buf.memory = V4L2_MEMORY_MMAP;
    queue_buf.index =buf.index;
    if (-1 == xioctl(Camera_fd, VIDIOC_QBUF, &queue_buf))
    errno_exit("VIDIOC_QBUF");
    PRINTK("number=%d\n",number++);
    }
    void V4L2_QT::releaseCamera()

    {
    unsigned int i;
    for (i = 0; i < n_buffers; ++i)
    {
    munmap(buffers[i].start, buffers[i].length);
    }
    free(buffers);
    if (-1 == ::close(Camera_fd))
    errno_exit("close");
    Camera_fd = -1;
    timer->stop();
    }
    int V4L2_QT::convert_yuv_to_rgb_pixel(int y, int u, int v) // convert one YUV sample to a packed 24-bit RGB value
    {
    unsigned int pixel32 = 0;
    unsigned char *pixel = (unsigned char *)&pixel32;
    int r, g, b;
    r = y + (1.370705 * (v-128));
    g = y - (0.698001 * (v-128)) - (0.337633 * (u-128));
    b = y + (1.732446 * (u-128));
    if(r > 255) r = 255;
    if(g > 255) g = 255;
    if(b > 255) b = 255;
    if(r < 0) r = 0;
    if(g < 0) g = 0;
    if(b < 0) b = 0;
    pixel[0] = r * 220 / 256;
    pixel[1] = g * 220 / 256;
    pixel[2] = b * 220 / 256;

    return pixel32;
    }

    // YUV422 to RGB24

    int V4L2_QT::convert_yuv_to_rgb_buffer(unsigned char *yuv, unsigned char *rgb, unsigned int width, unsigned int height) // convert a YUV pixel buffer to RGB

    {
    unsigned int in;
    unsigned int out = 0;
    unsigned int pixel_16=0;
    unsigned char pixel_24[3];
    unsigned int pixel32;
    int y0, u, y1, v;

    for(in = 0; in < width * height * 2; in += 4)
    {
    pixel_16 = yuv[in + 3] << 24 |
    yuv[in + 2] << 16 |
    yuv[in + 1] << 8 |
    yuv[in + 0];
    y0 = (pixel_16 & 0x000000ff);
    u = (pixel_16 & 0x0000ff00) >> 8;
    y1 = (pixel_16 & 0x00ff0000) >> 16;
    v = (pixel_16 & 0xff000000) >> 24;

    pixel32 = convert_yuv_to_rgb_pixel(y0, u, v); // U/V vary slowly, so two adjacent pixels share the same U/V when computing RGB
    pixel_24[0] = (pixel32 & 0x000000ff);
    pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
    pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
    rgb[out++] = pixel_24[0];
    rgb[out++] = pixel_24[1];
    rgb[out++] = pixel_24[2];

    pixel32 = convert_yuv_to_rgb_pixel(y1, u, v); // second pixel of the pair reuses the same U/V components
    pixel_24[0] = (pixel32 & 0x000000ff);
    pixel_24[1] = (pixel32 & 0x0000ff00) >> 8;
    pixel_24[2] = (pixel32 & 0x00ff0000) >> 16;
    rgb[out++] = pixel_24[0];
    rgb[out++] = pixel_24[1];
    rgb[out++] = pixel_24[2];
    }
    return 0;
    }

    void V4L2_QT::convertMJPEG2Mat(unsigned char*mjpeg)

    {

    convert_yuv_to_rgb_buffer(mjpeg,rgb,WIDTH,HEIGHT);
    pixmap = QPixmap::fromImage(*frame);
    ui->label->setPixmap(pixmap);
    PRINTK("yuv-->rgb Success\n");
    //sprintf(ImageName,"ImageName%04ld.jpg",ImageNum++);

    //imwrite(ImageName,RGBImage);

    }
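
    The listing above relies on WIDTH, HEIGHT, CLEAR and PRINTK from v4l2_heade.h, which the post does not show. A minimal sketch of what that helper header might contain (these definitions are assumptions, not the author's original file):

    // v4l2_heade.h (hypothetical reconstruction): small helpers used by V4L2_QT.cpp
    #ifndef V4L2_HEADE_H
    #define V4L2_HEADE_H

    #include <cstdio>
    #include <cstring>

    #define WIDTH  640                        // capture/preview width in pixels (assumed)
    #define HEIGHT 480                        // capture/preview height in pixels (assumed)

    // Zero out a structure before handing it to an ioctl.
    #define CLEAR(x) memset(&(x), 0, sizeof(x))

    // Lightweight debug print; forwards printf-style arguments to stderr.
    #define PRINTK(...) fprintf(stderr, __VA_ARGS__)

    #endif // V4L2_HEADE_H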
  • V4L2_QT.h

    #ifndef V4L2_QT_H
    #define V4L2_QT_H

    // Qt headers
    #include <QMainWindow>
    #include <QDebug>
    #include <QLabel>
    #include <QWidget>
    #include <QPixmap>
    #include <QPainter>
    #include <QTimer>

    // OpenCV headers
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui_c.h>

    // Linux / V4L2 headers
    #include <stdio.h>
    #include <stdlib.h>
    #include <linux/types.h>
    #include <linux/videodev2.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <sys/time.h>
    #include <sys/mman.h>   // mmap/munmap
    #include <sys/ioctl.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <assert.h>     // assert()

    #include "v4l2_heade.h" // assumed to provide WIDTH, HEIGHT, CLEAR, PRINTK
    namespace Ui {
    class V4L2_QT;
    }
    class V4L2_QT : public QMainWindow
    {
    Q_OBJECT
    public slots:
    void post_preview();
    public:
    explicit V4L2_QT(QWidget *parent = 0);
    ~V4L2_QT();
    unsigned char rgb[WIDTH*HEIGHT*3];
    private:

    Ui::V4L2_QT *ui;
    struct buffer
    {
    unsigned char *start;
    size_t length;
    };

    QTimer *timer;
    QImage *frame;
    QPixmap pixmap;
    struct buffer *buffers;
    unsigned int n_buffers ;
    int number;
    char *device;
    char *device_2;
    int Camera_fd;
    int index;
    unsigned int pre_w;//预览窗口w
    unsigned int pre_h;//预览窗口h

    int pixel_format;

    int xioctl(int fd, int request, void *arg);
    void errno_exit(const char *s);
    void init_mmap();
    int opendevice(char*device);
    int init_device(unsigned int w, unsigned int h);
    void readCamera();
    void stream_on();
    void releaseCamera();
    void process_image(unsigned char *buf,int size);
    int convert_yuv_to_rgb_pixel(int y, int u, int v);
    int convert_yuv_to_rgb_buffer(unsigned char *yuv, unsigned char *rgb, unsigned int width, unsigned int height);
    void convertMJPEG2Mat(unsigned char*mjpeg);
    };

    #endif // V4L2_QT_H
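
    The post does not show an entry point; a minimal, assumed main.cpp that creates the widget and starts the Qt event loop could look like this:

    // main.cpp (assumed, not part of the original post)
    #include <QApplication>
    #include "v4l2_qt.h"

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);   // owns the Qt event loop
        V4L2_QT window;                 // constructor opens the camera and starts the preview timer
        window.show();
        return app.exec();              // run until the window is closed
    }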
Author: ZhaoH.T
Original post: http://www.funful.ink/2019/05/25/2019-05-25-V4L2/
Copyright notice: unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit FunfulBlog when reposting.
