Video PCI cards: Computer parts, laptops, electronics, and more

Video PCI Cards & Capture Cards

Blackmagic Design

Item# BMD-DLMR4K

The Blackmagic DeckLink Mini Recorder 4K is a low profile PCIe capture card, featuring 6G-SDI and the latest HDMI 2.0a connections so you can record all formats up to 2160p30 on your computer! The SDI and HDMI inputs automatically detect and switch between all video formats. You even get high dynamic range recording and metadata over HDMI, along with support for Rec. 2020 color space. The DeckLink Mini Recorder 4K is perfect for integrating into mobile live…

Our Price: $195.00

Availability: In Stock

Add to Wish List

  • Blackmagic Design

    Item# BMD-BDLKSDI4K

    The Blackmagic Design DeckLink SDI 4K is perfect when you need an SDI only solution but demand the highest quality SDI capture and playback. You get incredible quality 6G-SDI on any PCI Express Mac, Windows or Linux computer. DeckLink SDI 4K is the world’s smallest and most affordable multi rate SD/HD and Ultra HD SDI video card that lets you connect to any SD, HD-SDI, 2K and Ultra HD equipment. You also get RS-422 deck control, internal keying and reference input for a complete solution…

    Our Price: $295.00

    Availability: In Stock

    Add to Wish List

  • Grass Valley

    Item# GVLY-VDA-1002

The VDA-1002 is an analog video distribution amplifier with a differential looping input and eight outputs. Settings are controlled from the card edge, and signal status indication shows input signal presence. The differential input rejects hum and other artifacts on incoming signals.
The new nine-slot-wide VDA-1002-3NRP, combined with nine VDA-1002 cards, maximizes analog video distribution to 70 BNC outputs with internal looping of the input signal. Two VDA-1002-3NRP rears (with…

    Our Price: $278.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • Blackmagic Design

    Item# BMD-DLMMHD

Get broadcast quality HDMI and SDI video playback and monitoring from your PCIe computer! Blackmagic Design DeckLink Mini Monitor HD includes 3G-SDI and HDMI 2.0 connections that let you connect to any television, monitor or projector for broadcast monitoring, including the latest high dynamic range (HDR) formats! Works with software such as DaVinci Resolve, Final Cut Pro and more.

    DeckLink Mini Monitor HD features advanced high quality video technology that supports uncompressed and compressed capture…

    Our Price: $129.00

    Availability: In Stock

    Add to Wish List

  • Apantac

    Item# APA-OG-MI-9-SET2

Apantac OG-Mi-9#-SET-2 is a bundle that includes the OG-Mi-9#-MB & OG-Mi-9#-RMC. It occupies 2 slots in an openGear frame and includes 11x HD-BNC to BNC cables.
Features: OGX/3.0 card-based form factor
Low latency – single frame processing delay
9 auto-detect 3G (Level A)/HD/SD-SDI inputs
Low power consumption – 15W
Ethernet for configuration, dynamic labels & tallies (TSL)
Standalone labels, customizable logo
9 GPIs for tallies, count up/down trigger or preset recall
Built-in analog/digital…

    Our Price: $2,309.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List


  • FOR-A

    Item# FORA-HVS-100PCI

The FOR-A HVS-100PCI mounts HDMI and VGA terminals on a single card; using both provides 2 input channels.
    Specifications: HD Mode Resolutions
    1080/59.94p: 1024 x 768/60Hz (XGA), 1280 x 1024/60Hz (SXGA), 1280 x 768/60Hz (WXGA), 1600 x 1200/60Hz (UXGA), 1920 x 1200/60Hz (WUXGA), 1920 x 1080/59.94p (HDTV)
    1080/50p: 1024 x 768/60Hz (XGA)**, 1280 x 1024/60Hz (SXGA)**, 1280 x 768/60Hz (WXGA)**, 1600 x 1200/60Hz (UXGA)**, 1920 x 1200/60Hz (WUXGA)**,…

    Our Price: $1,852.50

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • FOR-A

    Item# FORA-HVS-100PCO

The FOR-A HVS-100PCO mounts HDMI and VGA terminals on a single card; using both provides 2 output channels.
    Specifications: HD Mode Resolutions
    1080/59.94p: 1280 x 1024/60Hz (SXGA), 1600 x 1200/60Hz (UXGA), 1680 x 1050/60Hz (WSXGA), 1920 x 1200/60Hz (WUXGA), 1920 x 1080/59.94p (HDTV)
1080/50p: 1280 x 1024/60Hz (SXGA)**, 1600 x 1200/60Hz (UXGA)**, 1680 x 1050/60Hz (WSXGA)**, 1920 x 1200/60Hz (WUXGA)**, 1920 x 1080/50p…

    Our Price: $1,852.50

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • AVMATRIX

    Item# LIL-VC12-4K

AVMatrix VC12 4K is a single-channel HDMI 2.0 4K60 PCIe capture card that supports 1 channel of HDMI 2.0 input with loop-out and capture. Multiple PCIe capture cards can be installed in a single PC to work simultaneously and achieve simultaneous live streaming on multiple platforms.

The VC12 4K has a built-in HDMI loop-out, which can transmit the image directly to another display. It allows users to play games and run live broadcast and capture at the same…

    Our Price: $279.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • AVMATRIX

    Item# LIL-VC41

AVMatrix VC41 is a 4-channel 3G-SDI PCIe capture card. It supports up to 4 channels of SDI signal source input, connecting to 4 SDI devices such as game consoles, cameras, switchers, etc. Multiple PCIe capture cards can be installed in a single PC, so you can support multiple devices simultaneously in order to achieve simultaneous live streaming on multiple platforms.

The PCIe card can be widely used in various fields, such as church, live sports, virtual studio, game streaming, education,…

    Our Price: $379.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • AVMATRIX

    Item# LIL-VC42

AVMatrix VC42 is a 4-channel HDMI PCIe capture card. It supports 4 channels of HDMI signal source input, connecting to up to 4 HDMI devices such as game consoles, cameras, switchers, etc. Multiple PCIe capture cards can be installed in a single PC, supporting multiple devices running simultaneously to achieve simultaneous live streaming on multiple platforms.

The PCIe capture card provides superior performance: single-channel input and capture up to 1080p60 with bit rates up to 200 Mbps,…

    Our Price: $319.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • Magewell

    Item# MGW-11040

Magewell’s 11040 Pro Capture HDMI One-Channel PCIe 2.0 Capture Card is a capture card for HDMI, Composite, Component and S-video with audio. This card captures HDMI + Embedded Audio via the HDMI port and also the following video signals via the included breakout cable: Component + Audio, Composite + Audio and S-video + Audio. It has an adaptive HDMI equalizer supporting cable lengths up to 98ft.

    This card is loaded with features such as ultra low latency, multiple capture…

    Our Price: $299.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List

  • Datapath

    Item# VISIONSC-HD4PL-H

    A wide range of applications demand multiple channels of video capture from a single card. Markets include advanced medical and machine vision capture, military applications and security/surveillance systems. Suitable for these markets and others Datapath’s VisionSC-HD4+ audio video capture card provides an outstanding, powerful solution for multiple HDMI or DVI video capture with support for HDCP.

    The VisionSC-HD4+ has four onboard HDMI 1.4 video capture channels; two channels supporting 3840…

    Our Price: $3,165.00

    Availability: Typically Ships in 3-10 Business Days

    Add to Wish List















PCI-Express 4.0 vs 3.0 Video Card Performance

    Always look at the date when you read an article. Some of the content in this article is most likely out of date, as it was written on November 30, 2020. For newer information, see our more recent articles.

    Table of Contents

    Introduction

    PCI-Express has been the standard for connecting video cards and other expansion devices inside of computers for many years now, and several generations of the technology have now passed. With each of those generations, the amount of data that can be transferred over the PCIe connection has increased. How much impact does that have on modern video cards? Is there any benefit to running a PCIe 3.0 card in a 4.0 slot, or loss if using a 4.0 card in a 3.0 slot? We frequently get questions like these during the consultation process with new customers, so I thought it would be worth taking some time to test in order to better answer these queries.

    Test Methodology

    In order to test the impact of PCI-Express bandwidth on performance, we are going to look at two video cards in a system where we can control the PCIe slot generation within the BIOS. Why two cards? Because one is natively PCIe 3.0 (NVIDIA’s Titan RTX) while the other uses PCIe 4.0 (their GeForce RTX 3090). Both have 24GB of memory, to avoid the amount of onboard VRAM affecting anything, and are effectively the top performing card from their respective GPU families. To minimize the impact of the CPU, we went with the top-end of AMD’s new Ryzen 5000 series, the 5950X, installed on a Gigabyte X570 AORUS ULTRA motherboard which has a BIOS setting for selecting which PCIe version is used.

    Here are the full specifications of the system we used for this testing:

    With this hardware configuration, we tested each of the two video cards in each of the four PCIe Slot Configuration settings (Gen1 through Gen4). Most of the questions we get from prospective customers center around PCIe Gen3 and Gen4, but by going further back with our tests we can get a better picture of how PCIe bandwidth impacts video card performance. For example, PCIe Gen2 on a full x16 size slot (which these video cards were using) is roughly equivalent in bandwidth to PCIe Gen3 x8, and that is a common setting for motherboards to use when running multiple video cards on chipsets that don’t have a massive number of PCIe lanes available. Likewise, PCIe Gen1 at x16 should be comparable to PCIe Gen3 at x4 – and PCIe Gen3 at x16 is on par with PCIe Gen4 at x8.
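The lane/generation equivalences mentioned above follow directly from per-lane throughput; as a quick sketch (the per-lane figures below are approximate post-encoding rates, not numbers from the article itself):

```python
# Approximate per-lane PCIe throughput in GB/s after encoding overhead.
# (Gen1/Gen2 use 8b/10b encoding; Gen3/Gen4 use 128b/130b.)
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985, 4: 1.969}

def bandwidth(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# The equivalences discussed above:
print(f"Gen2 x16: {bandwidth(2, 16):.1f} GB/s  vs  Gen3 x8: {bandwidth(3, 8):.1f} GB/s")
print(f"Gen1 x16: {bandwidth(1, 16):.1f} GB/s  vs  Gen3 x4: {bandwidth(3, 4):.1f} GB/s")
print(f"Gen3 x16: {bandwidth(3, 16):.1f} GB/s  vs  Gen4 x8: {bandwidth(4, 8):.1f} GB/s")
```

Each pairing lands within a couple of percent of the other, which is why testing Gen1 through Gen4 at x16 also approximates common reduced-lane configurations.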

    Finally, on the software side, we used a handful of benchmarks across three main types of applications:

    • OctaneBench, Redshift Demo, and V-Ray Next Benchmark for GPU based rendering
    • PugetBench for DaVinci Resolve Studio and Neatbench for post-production (video editing)
    • Unigine Superposition (at a couple of different resolutions) for game engines

    Benchmark Results

    Here are the results of our testing, split into galleries by application type, with some analysis after each set of charts:


    GPU Based Rendering Engines


    None of these rendering benchmarks show much difference in performance between the various generations of PCI-Express. There is a slight curve in Redshift, with about an 8-second slowdown from PCIe Gen4 to Gen1 on the RTX 3090 and 5 seconds on the Titan RTX. V-Ray Next shows nothing outside the test’s margin of error, and while there is a small drop on OctaneRender it is only around 2% (so that may well be within the margin of error too).

    It is worth remembering the way that GPU rendering works, though: scene data is sent to the card over the PCIe connection, and then the processing is all done on the video card(s), then the resulting image is sent back to the system to be displayed and/or saved. The speed of the PCIe bus is going to impact how quickly the data can be moved back and forth, but won’t impact the actual computations happening on the card. That probably explains why we see so little impact from the older versions of PCI-Express in this test.
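As a rough illustration of that transfer cost (the scene size here is my own assumption for the sketch, not a figure from the testing), the time to move scene data to the card scales inversely with link bandwidth, while the render computation itself is unaffected:

```python
# Approximate one-direction bandwidth of an x16 link per PCIe generation (GB/s).
X16_GBPS = {1: 4.0, 2: 8.0, 3: 15.8, 4: 31.5}

scene_gb = 12.0  # assumed scene size; pick any value, the scaling is what matters

for gen, gbps in X16_GBPS.items():
    # Transfer time is the only PCIe-dependent part of a GPU render job here.
    print(f"Gen{gen}: {scene_gb / gbps:.2f} s to upload {scene_gb:.0f} GB")
```

For a 12 GB scene this works out to roughly 3 s on Gen1 versus under half a second on Gen4 — a swing on the same order as the few-second Redshift deltas measured above, consistent with transfer time (not compute) being the only PCIe-sensitive component.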

    There are also some notable exceptions to this, which are simply outside the purview of these benchmarks. For example, some rendering engines support “out of core memory” – which is where some of the scene data is stored in main system memory if there isn’t enough dedicated video memory on the card(s) themselves. In that situation, there would be a lot more data being transmitted over PCI-Express, throughout the rendering process, and thus the speed of that connection would be a lot more important.


    Post Production


    Post-production makes much more interactive use of the video than rendering, so here we see large performance differences across the various PCI-Express generations. DaVinci Resolve shows a steady drop from PCIe Gen4 down to Gen1 on the RTX 3090, while the Titan RTX has effectively no difference between Gen4 and Gen3, but then drops when using Gen2 and Gen1.

    NeatBench, which tests the Neat Video noise reduction algorithm’s performance, shows an even larger reduction when using the older version of PCI-Express… to the point where an RTX 3090 on Gen1 is only providing half the speed of the same card running on Gen4. Again, though, the Titan RTX doesn’t benefit from Gen4 vs Gen3 – presumably, because it is a Gen3 card itself, so even if the system it is in is capable of the newer PCIe Gen4 speeds the Titan is stuck at Gen3.


    Game Engines


We don’t generally test game performance here at Puget Systems, as so many other outlets already look at that subject in great depth, but I thought I would try seeing if Unigine Superposition showed any differences across the different PCIe speeds. It did not. There is a small, sub-2% difference – similar to what we saw with OctaneBench – but even if that is being caused by the generational difference it is so small that a variance like that would not be noticeable when playing games. Again, this is likely due to the way this benchmark works: if all of the data for the test scenes can fit in the system memory, then once it is loaded up at the start the speed of the PCIe bus will no longer matter. Real gaming would see a different usage pattern, but even there I suspect that PCIe Gen4 vs Gen3, at least, would have no measurable performance impact.

    Perhaps we can revisit this with more of a focus on game development in the future, as my colleague Kelly Shipman has been doing amazing work on testing the Unreal Engine.

    Conclusion

    For applications where data is constantly traveling across the PCI-Express bus, we can see that the generational bandwidth differences do have a very measurable impact on real-world performance. The best examples of that in the tests we conducted for this article were those looking at post-production & video editing, which exhibited substantial gains moving up from PCIe Gen1 to Gen2, moderate gains from Gen2 to Gen3, and then a small boost from Gen3 to Gen4 on the RTX 3090 (which is, itself, a Gen4 card). The Titan RTX, a Gen3 card, did not show a difference between running on PCIe Gen3 vs Gen4.

    Other programs where data is only sent across PCIe before and after a long calculation did not see that sort of difference, however. At least within the manufacturer benchmarks we utilized, there was at best a small gain when using the latest Gen3 and Gen4 speeds – but definitely nothing like what we saw with video editing.

In the end, though, the PCI-Express Gen1 and Gen2 results are mostly an academic question. Virtually all modern motherboards are going to run at PCIe Gen3 or Gen4, and if running a single video card then they pretty much all will offer full x16 lane support as well. This gets a little trickier when running multiple video cards, which is common for some of these professional workloads, because while the PCIe generation isn’t going to drop, the number of lanes available per slot/card definitely can. PCIe Gen3 at x8 lanes is going to be roughly on par with PCIe Gen2 at a full x16 lanes, and Gen3 x4 is close to Gen1 at x16… so depending on your exact motherboard and GPU configuration it is entirely possible to end up with lower bandwidth per card. The good news is that this looks like it will have little negative impact on GPU based rendering, which is one of the places where having a lot of video cards can really shine – but if you are working with video editing or some other application that depends on sending a lot of data back and forth to the graphics card(s), then it is a good idea to ensure that your system is providing the most bandwidth possible over PCI-Express.

    Does putting a PCIe Gen3 video card in a Gen4 slot improve performance?

    No, if the graphics card itself is PCIe 3.0 then putting it in a faster 4.0 slot will not provide any benefit since they will be operating at Gen3 speed.

    Does putting a PCIe Gen4 video card in a Gen3 slot reduce performance?

    In some applications, yes – there can be a small performance drop when running a PCI-Express 4.0 capable card in a system/slot that is only using PCIe 3.0. We did not find any impact for gaming or GPU-based rendering, but we did measure a small decline (less than 5%) with video editing in DaVinci Resolve and a little bit larger drop (~10%) with noise reduction in Neat Video.


GeForce FX5500 PCI video card: when PCI-Express isn’t enough

Nowadays, buying a second and/or third monitor for a PC is not a problem – used ones are inexpensive.
Connecting them, however, is not so simple, especially if the PC itself is far from new: the video card built into the motherboard may lack a second output, PCI-E connectors may be in short supply, USB3.0-to-VGA or USB3.0-to-HDMI converters may not be applicable because there is no USB3.0 at all, and their USB2.0 counterparts cost more than the motherboard itself.
In this case, the problem can be solved by…

… using a PCI graphics card.
Usually these words bring to mind something on the level of an S3 Trio with 1-2 megabytes of memory, practically helpless by modern standards.

In fact, the choice is far from limited to this.
It would seem that the star of PCI video cards set finally and irrevocably in the late 1990s, but in fact it has not. The most advanced board I know of is the ATI Radeon HD5450 with 2 GB DDR3.

It is quite useful for those who want to squeeze a little more out of their computer when no free PCI-E expansion slots are left (also because PCI tolerates overclocking better). In my opinion, there is some irony in the fact that video cards with the slow PCI interface have outlived the AGP standard, which was once proposed specifically to solve the “bottleneck” problem of the graphics subsystem.

    The ordered video card arrived in a large bag of bubble wrap by Dutch mail in just 8 days – almost a record.

Unfortunately, the very fact that the package contained a video card agitated the customs officers, and they decided to open it. You must have heard plenty of stories about cryptocurrency mining.

    They tried in vain – you can’t mine on it. And, apparently, they were very upset about it.
    The board was packed in a rather uninformative cardboard box.

    Of all the inscriptions on the box, the only thing that turned out to be true was that it was a GeForce and that it was compatible with Windows 7.

    Inside the box are a graphics card packed in an antistatic bag and a CD with drivers.
    There is no DVI-VGA adapter.

    More than half of the front side of the PCB is covered by a heatsink. VGA, S-Video and DVI-I outputs are available.

There is thermal paste between the heatsink and the video core.

But there is none on the memory chips, and in general the heatsink only gets in the way of removing heat from them.

Its finning is essentially decorative, so replacing it is recommended.

    For example, on a chipset heatsink from an old motherboard.

    It even matches perfectly.

For the sake of interest, I decided to clean the thermal paste off the core, and a surprise awaited me.

However, the only difference between the FX5500 and FX5200 is slightly different operating frequencies, which these days, for a DirectX 9 video card, is of purely academic interest (and for everything else there is PowerStrip).

There are almost no components on the back of the board, but there is a sticker with the date of manufacture. According to it, the board was made in August 2016. And there is also a sticker stating that the board is actually an ATI Radeon 9550.

There is a family resemblance, isn’t there?

When two monitors are connected, the board initializes the VGA-connected one first during boot, but afterwards treats the DVI-connected one as the primary display.

    The drivers install fine, the card appears in the device manager, after which you can use the desktop extension.

The card’s parameters read out correctly in GPU-Z.

The video card supports a pair of desktops at a resolution of 1600×1200 with 32-bit color depth.

In terms of performance, this board, developed in 2003, is roughly at the level of the MX400, so there is no point running modern benchmarks on it – today it is simply two additional video outputs in a PC.
Conclusion: at just under $17 at the time of purchase, the video card can easily replace two USB2.0-VGA or USB2.0-DVI converters with a total cost of $80-90. So if you need to add a couple of video outputs and PCI-E is in short supply but free PCI slots remain, this is a good solution to the problem.

    Testing video card interfaces: PCI vs. PCI Express

    Table of contents

    • Introduction
    • Video card interfaces: history of development
    • PCI to PCI-e adapter
    • Test bench
    • Socket 370 platform and Full HD content
    • Socket 478 platform, 4K content and YouTube
    • Gaming tests on the Socket 478 platform
      • Test results
    • Gaming tests on the LGA 775 platform
      • Test results
    • Conclusion

    Introduction

The internal interfaces of the personal computer develop actively, replacing one another with enviable regularity and not always providing backward compatibility with previous versions. In this article, you will learn what outdated PCs are capable of when a modern video card is installed in them. Let’s start with a brief history.




    Video card interfaces: history of development

    If we talk about interfaces for graphics accelerators, there have been three global changes over the past twenty-five years: PCI, AGP and PCI Express.

The PCI bus was developed by Intel in 1991 and went through several upgrades. The most common, 32-bit version worked at speeds from 133 to 266 MB/s. Moreover, these speeds were shared by all devices connected to the common bus.

Very soon, in 1996, Intel introduced a new system interface specifically for video cards – the Accelerated Graphics Port (AGP). This standard also went through several updates. In 2003, the Intel 865 series chipsets came out with support for AGP 8X, which provided data transfer rates up to 2 GB/s. Motherboards with similar capabilities also existed for AMD processors, actively produced in 2003 and 2004. For example, nForce3 models for Socket 939 supported AGP 8X and dual-core Athlon CPUs.

The PCI Express standard was adopted in 2002; the sixteen lanes of the first version provided a transfer rate of 4 GB/s in each direction. By 2004, motherboards in which AGP 8X had been replaced by the new interface were already actively sold. As a result, AGP 8X was forced to compete with the progressive PCI-E almost immediately after its release. PCI-E received full support from manufacturers, while compatibility with the old interface was provided through special bridge chips soldered onto video cards.
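The rates quoted in this history can be lined up to show the jump each interface brought. A small sketch using the figures above (with AGP 8X written out as its nominal 2133 MB/s):

```python
# Peak transfer rates of the interfaces discussed above, in MB/s.
# Note: PCI bandwidth is shared by every device on the bus,
# while AGP and PCIe give the video card a dedicated link.
interfaces = {
    "PCI 32-bit/33 MHz": 133,
    "PCI 32-bit/66 MHz": 266,
    "AGP 8X": 2133,
    "PCIe 1.0 x16 (per direction)": 4000,
}

baseline = interfaces["PCI 32-bit/33 MHz"]
for name, mbps in interfaces.items():
    print(f"{name}: {mbps} MB/s ({mbps / baseline:.0f}x baseline PCI)")
```

Even first-generation PCIe x16 offers roughly thirty times the bandwidth of the common 33 MHz PCI bus, which frames the adapter experiment that follows.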

Nvidia provided AGP support in the GeForce 6 and GeForce 7 graphics accelerator lines, but abandoned it after the release of the G80 in 2006. Its rival acted somewhat differently – ATI/AMD video cards of the HD 3000 and HD 4000 series designed for the AGP connector were produced until 2008.

    PCI to PCI-e adapter

But did graphics manufacturers manage to squeeze the full potential out of previous platforms? Let’s try to answer this question. To do so, we just need to install a modern video card in an outdated PC and evaluate its capabilities.

Not an easy task – adapters that would allow installing a PCI-E model in an AGP slot cannot be found on sale, and it is not certain they exist at all. However, there are options that allow installing a PCI-E device in a PCI slot, so we will use one of those. That’s right: the tested video cards will be installed in a slot that was declared obsolete for this purpose twenty years ago!

The adapter used is based on the PLX PEX8112 chip. Despite the full-size soldered x16 connector, the PEX8112 only supports PCI Express x1 mode. It should be noted that this adapter has no auxiliary power input and is not intended for installing video cards: according to the specification, a PCI slot can provide up to 500 mA of current at 12 V, that is, only 6 W of power.

    This means that installing a video card directly into the adapter can damage your computer! To get around this limitation, video cards were connected through a riser with additional power.
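The 6 W figure above is simple arithmetic, and comparing it against typical board powers shows why the powered riser is mandatory. The TDP values below are my own approximate assumptions for illustration, not measurements from this article:

```python
# Power available from a PCI slot's 12 V rail, per the spec figure quoted above.
slot_voltage_v = 12.0
slot_current_a = 0.5
slot_power_w = slot_voltage_v * slot_current_a  # P = V * I = 6 W

# Assumed board powers (approximate published TDPs, used here for illustration).
card_tdp_w = {"GeForce GT 710": 19, "Radeon R7 240": 30}

for card, tdp in card_tdp_w.items():
    verdict = "slot OK" if tdp <= slot_power_w else "needs powered riser"
    print(f"{card}: ~{tdp} W -> {verdict}")
```

Even the most frugal cards in the test draw several times what the slot can safely deliver, so all power must come through the riser rather than the adapter.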

    The adapter was correctly identified by the system and did not require the installation of drivers and additional settings on all tested systems.

    Test bench

    This article will test or mention the following video card models:

    • GeForce 6200 AGP. Uses GPU NV44 – the video core of the lower price range of 2004 release. Supports DirectX 9.0c technology and PureVideo video processing acceleration. This graphics processor is notable for the fact that it was produced in versions with native support for AGP and PCI-E without the use of additional adapters.
    • GeForce 7600 GT. G73 GPU, 2006, mid-range, DirectX 9.0c and PureVideo support. Let me remind you that this is the latest generation of Nvidia with support for the Accelerated Graphics Port.
    • GeForce 8500 GT. Budget graphics core G86, support for DirectX10 and PureVideo second generation. Such video cards were produced only for PCI Express.
    • Radeon HD 3650. DirectX 10 and UVD video stream acceleration technology version 1. The penultimate generation of AMD graphics accelerators with AGP support.
    • GeForce GT 710 2 GB DDR3. The most budget version of Kepler, support for DirectX 12 and VP5 video stream acceleration.
    • Radeon R7 240 2 GB DDR3. One of the most affordable AMD graphics cards with GCN architecture, support for DirectX 12 and UVD 3.1. At one time, I already talked about it in the article “Overclocking the Radeon R7 240: theory and practice.”

    Three configurations based on Intel platforms from different years were assembled for research.


Motherboard / Chipset / CPU / RAM:

      • Soltek SL-65EB (Socket 370) – Intel 440BX – Intel Celeron 950 @ 1093 MHz – 768 MB PC100 @ 115 MHz
      • MSI 865GM2 (Socket 478) – Intel 865E – Intel Pentium 4 3.2 GHz – 2 GB DDR1 400 MHz
      • ASUS P5B Deluxe (LGA 775) – Intel 965 – Intel Xeon E5440 2.8 GHz – 6 GB DDR2 667 MHz

    Socket 370 platform and Full HD content

The legendary Intel 440BX chipset supports up to 1 GB of RAM; 768 MB is installed on the test platform. This volume is quite enough to run Windows 7.

At the first stage of the experiments, I chose this particular operating system, hoping to be able to use fresh software and drivers. The “Seven” itself installed without problems, but with software and drivers everything was not so smooth.

Socket 370 processors do not support SSE2 instructions, which most modern programs and video card drivers require. And the OS itself turned out to be too heavy for the system under test.

Text editors and other simple programs worked, but there were problems with video playback: the popular modern H.264 codec turned out to be an unbearable burden for the 2001 Celeron. All hope rested on GPU-accelerated video decoding – for each of the video cards under study, the manufacturers had announced corresponding proprietary technologies.

Testing was done with the popular video players VLC and Media Player Classic Home Cinema. Unfortunately, none of the tested video cards with AGP support coped with the task: the GeForce 6200 AGP, GeForce 7600 GT and Radeon HD 3650 all failed. Judging by information available online, in their case the corresponding technologies do not perform all decoding functions, so the computational load is not completely removed from the processor.

It is worth noting here that I did not have a Radeon HD 4670 at hand – the most advanced AGP video card released by AMD/ATI. It supports UVD 2 technology and should, in theory, fully accelerate H.264 video playback.

Now let’s connect the GeForce 8500 GT, GeForce GT 710 and Radeon R7 240 video cards through the adapter. These models coped with the task: the Full HD trailer of the film “Ghost in the Shell” (~34 Mbps bit rate) loaded the processor by about 50%. Slight stuttering was recorded only in the first seconds after opening the file.
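Some rough back-of-the-envelope arithmetic (my own assumed figures: 1080p at 24 fps in YUV 4:2:0, plus the ~34 Mbps bit rate above) suggests why hardware decoding is such a win on a narrow shared bus: only the compressed stream has to cross it, not the decoded frames.

```python
# Data rates involved in playing the Full HD trailer (assumed parameters).
width, height, fps = 1920, 1080, 24
bytes_per_pixel = 1.5  # YUV 4:2:0 uses 1.5 bytes per pixel on average

# What would have to cross the bus if the CPU decoded and then uploaded frames:
raw_mb_s = width * height * bytes_per_pixel * fps / 1e6

# What crosses the bus when the GPU decodes: just the compressed bitstream.
compressed_mb_s = 34 / 8  # 34 Mbps -> MB/s

print(f"decoded frames:    {raw_mb_s:.0f} MB/s")
print(f"compressed stream: {compressed_mb_s:.2f} MB/s")
```

Decoded frames would consume more than half of the shared 133 MB/s PCI bus on their own, while the compressed stream is only a few MB/s – leaving the bus (and the Celeron) with far less to do when the GPU handles decoding.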

Note that I only managed to set up the Media Player Classic Home Cinema player, and even there hardware acceleration did not work out of the box. The player was installed as part of the K-Lite Codec Pack Mega – the edition its creators recommend for Windows XP. However, the default installation, using the LAV filter, did not produce results, and hardware acceleration was not engaged.