Flavors and Types of IPC Mechanisms in Linux (How Many Kinds of IPC Does Linux Have?)

In the Linux world, there are many interprocess communication (IPC) methods available to system programmers. After some web searching, I found few blogs or books that summarize them all. This article briefly lists them, with minimal explanation and links to the official manuals.


POSIX IPCs

POSIX-flavor IPCs include semaphores, shared memory, and message queues.


semaphore

There are two subtypes of semaphores: named and unnamed. For details, see man 7 sem_overview.

sem_open(3)    // named
sem_close(3)   // named
sem_unlink(3)  // named
sem_init(3)    // unnamed
sem_destroy(3) // unnamed

shared memory

POSIX shared memory (shm) IPC only has a named version; the shm objects are stored in a tmpfs (by default mounted at /dev/shm).

shm_open(3)
shm_unlink(3)

message queue

POSIX message queue (mq) IPC objects are also named, but they are stored in a special filesystem, mqueue. An mqueue filesystem can be mounted with the following command:

mount -t mqueue none /dev/mqueue

The libc APIs and their syscall counterparts are listed below; for details, see man 7 mq_overview.

Library interface    System call
mq_close(3)          close(2)
mq_getattr(3)        mq_getsetattr(2)
mq_notify(3)         mq_notify(2)
mq_open(3)           mq_open(2)
mq_receive(3)        mq_timedreceive(2)
mq_send(3)           mq_timedsend(2)
mq_setattr(3)        mq_getsetattr(2)
mq_timedreceive(3)   mq_timedreceive(2)
mq_timedsend(3)      mq_timedsend(2)
mq_unlink(3)         mq_unlink(2)

SystemV / XSI IPCs

Similar to the POSIX IPCs, SystemV/XSI-flavor IPCs come in the same three types under Linux: semaphore, shared memory, and message queue. For details, see man 7 sysvipc.

semaphore

semget(2) // Create a new semaphore set or obtain the ID of an existing set. This call returns an identifier that is used in the remaining APIs.
semop(2)  // Perform operations on the semaphores in a set.
semctl(2) // Perform various control operations on a set, including deletion.

shared memory

shmget(2) // Create a new segment or obtain the ID of an existing segment.  This call returns an identifier that is used in the remaining APIs.

shmat(2)  // Attach an existing segment to the calling process's address space.

shmdt(2)  // Detach a segment from the calling process's address space.

shmctl(2) // Perform various control operations on a segment, including deletion.

message queue

msgget(2) // Create a new message queue or obtain the ID of an existing message queue.  This call returns an identifier that is used in the remaining APIs.

msgsnd(2) // Add a message to a queue.

msgrcv(2) // Remove a message from a queue.

msgctl(2) // Perform various control operations on a queue, including deletion.


Other UNIX IPCs

Besides the POSIX- and SystemV-flavor IPC APIs above, there are several classic IPC mechanisms widely implemented across UNIX-like OSs: pipe, FIFO, signal, and UNIX domain socket.

Pipe & FIFO

The pipe and FIFO IPCs are fundamentally the same, except that a FIFO is named (it exists as a path in the filesystem) while a pipe is not.

pipe(2)       // pipe
pipe2(2)      // pipe
popen(3)      // pipe
pclose(3)     // pipe
mkfifo(3)     // FIFO
mkfifoat(3)   // FIFO

Signal

Signals are an asynchronous notification mechanism: the kernel or another process delivers a signal, and the receiving process handles it via a registered handler. For details see man 7 signal.

kill(2)        // send a signal to a process
sigaction(2)   // examine and change a signal's disposition
sigprocmask(2) // examine and change the set of blocked signals
sigsuspend(2)  // wait for a signal
// and more ...

Unix Domain Socket

Unix domain sockets (UDS) use the familiar socket programming interface, but for local (same-host) IPC. For details, see man 7 unix.

socket(2)   // with domain AF_UNIX

Modern Linux IPCs


Binder

The binder IPC was initially implemented for the Android OS and has since been merged into the upstream Linux kernel. For details, see "The Android binderfs Filesystem" in the Linux kernel documentation.

DBus / kdbus

The DBus IPC is now widely used. It is implemented in userspace. Efforts were made toward a kernel implementation (kdbus), but it was never merged upstream. For details, see dbus (www.freedesktop.org).

1 comment

  1. @me

    Growing up, I rode a wave of Linux enthusiasm in my late teens and learned a lot about operating systems along the way.

    Now I'm a professional working with microservice architecture, and it tickles me how the two are nearly isomorphic to each other.

    The biggest high-level difference is that microservices add significant bloat in their abstractions (containers, storage replication, IPC over networks rather than locally), and in exchange they gain the advantage of being nomadic across systems.

    Whenever I have an architecture problem, it's reassuring to know an old Unix greybeard has likely already walked some version of this path.
