IMPROVING PARALLEL PROGRAMMING IN THE COMPUTE UNIFIED DEVICE ARCHITECTURE USING THE UNIFIED MEMORY FEATURE

Alexandru Pîrjan and Dana-Mihaela Petroşanu
Additional contact information
Alexandru Pîrjan: Faculty of Computer Science for Business Management, Romanian-American University, 1B, Expozitiei Blvd., district 1, code 012101, Bucharest, Romania
Dana-Mihaela Petroşanu: Department of Mathematics-Informatics I, University Politehnica of Bucharest, 313, Splaiul Independentei, district 6, code 060042, Bucharest, Romania

Journal of Information Systems & Operations Management, 2014, vol. 8, issue 2, 352-362

Abstract: One of the most important improvements in the Compute Unified Device Architecture (CUDA) 6.5 release, launched in August 2014, is support for Unified Memory, a feature that simplifies memory management by providing a single pool of managed memory shared between the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU).
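
A minimal sketch of the managed-memory pattern the abstract describes (illustrative only, not code taken from the article): a single cudaMallocManaged() allocation is visible to both the CPU and the GPU, so the separate host and device buffers and the explicit cudaMemcpy() transfers of the classic CUDA workflow are no longer needed.

#include <cstdio>
#include <cuda_runtime.h>

// Kernel that increments every element of the managed array on the GPU.
__global__ void increment(int *data, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        data[idx] += 1;
}

int main()
{
    const int n = 1 << 20;
    int *data = nullptr;

    // Single allocation visible to both host and device (Unified Memory);
    // no separate host/device buffers or explicit cudaMemcpy calls are needed.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i)
        data[i] = i;                      // initialized directly on the CPU

    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();              // wait before the CPU touches the data again

    printf("data[0] = %d, data[n-1] = %d\n", data[0], data[n - 1]);

    cudaFree(data);
    return 0;
}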

Keywords: CUDA; parallel programming; Unified Memory; Graphics Processing Unit; Central Processing Unit
Date: 2014

Downloads:
http://www.rebe.rau.ro/RePEc/rau/jisomg/WI14/JISOM-WI14-A14.pdf (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:rau:jisomg:v:8:y:2014:i:2:p:352-362

Handle: RePEc:rau:jisomg:v:8:y:2014:i:2:p:352-362