This article covers the fastest (lowest-latency) methods for inter-process communication between Java and C/C++. It should be a useful reference for anyone tackling the same problem; read on for the details.

Problem Description

I have a Java app connecting through a TCP socket to a "server" developed in C/C++.

Both app and server are running on the same machine, a Solaris box (but we're considering migrating to Linux eventually). The type of data exchanged is simple messages (login, login ACK, then the client asks for something, the server replies). Each message is around 300 bytes long.

Currently we're using sockets, and all is OK; however, I'm looking for a faster way to exchange data (lower latency) using IPC methods.

I've been researching the net and came up with references to the following technologies:

  • shared memory
  • pipes
  • queues
  • as well as what's referred to as DMA (Direct Memory Access)

but I couldn't find a proper analysis of their respective performance, nor how to implement them in both Java and C/C++ (so that they can talk to each other), except maybe for pipes, which I could imagine how to do.

Can anyone comment on the performance and feasibility of each method in this context? Any pointer/link to useful implementation information?


EDIT / UPDATE

Following the comments and answers I got here, I found info about Unix Domain Sockets, which seem to be built just over pipes and would save me the whole TCP stack. It's platform specific, so I plan on testing it with JNI, or with either juds or junixsocket.
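
For reference, newer JDKs (16 and later) also support Unix domain sockets directly through SocketChannel, so no JNI or third-party library is needed there. A minimal client sketch, where /tmp/app.sock is just a placeholder path that the C/C++ server would have to listen on, might look like:

import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class UdsCli {
  public static void main(String[] args) throws Exception {
    // connect to the Unix domain socket the C/C++ server listens on
    SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX);
    ch.connect(UnixDomainSocketAddress.of("/tmp/app.sock"));

    ByteBuffer msg = ByteBuffer.wrap("login".getBytes());
    ch.write(msg);                               // send a request

    ByteBuffer reply = ByteBuffer.allocate(300); // messages are ~300 bytes
    ch.read(reply);                              // wait for the ACK
    ch.close();
  }
}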

Next possible steps would be a direct implementation of pipes, then shared memory, although I've been warned about the extra level of complexity...
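
For the pipes step, one approach worth noting is that a FIFO created with mkfifo can be opened from Java like an ordinary file. A minimal sketch, assuming the C/C++ side has already created /tmp/req.fifo and /tmp/resp.fifo (the paths are just examples), might be:

import java.io.FileInputStream;
import java.io.FileOutputStream;

public class PipeCli {
  public static void main(String[] args) throws Exception {
    // FIFOs created beforehand with: mkfifo /tmp/req.fifo /tmp/resp.fifo
    // (opening a FIFO blocks until the other end opens it)
    try (FileOutputStream req = new FileOutputStream("/tmp/req.fifo");
         FileInputStream resp = new FileInputStream("/tmp/resp.fifo")) {
      req.write("login".getBytes());   // request goes to the C/C++ side
      req.flush();
      byte[] buf = new byte[300];      // messages are ~300 bytes
      int n = resp.read(buf);          // blocks until the reply arrives
      System.out.println("got " + n + " bytes");
    }
  }
}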


Thanks for your help.

Solution

Just tested latency from Java on my Core i5 2.8 GHz, only a single byte sent/received, 2 Java processes just spawned, without assigning specific CPU cores with taskset:

TCP         - 25 microseconds
Named pipes - 15 microseconds

Now explicitly specifying core masks, like taskset 1 java Srv or taskset 2 java Cli:

TCP, same cores:                      30 microseconds
TCP, explicit different cores:        22 microseconds
Named pipes, same core:               4-5 microseconds !!!!
Named pipes, taskset different cores: 7-8 microseconds !!!!

So:

  • TCP overhead is visible
  • scheduling overhead (or core caches?) is also the culprit

At the same time, Thread.sleep(0) (which, as strace shows, causes a single sched_yield() Linux kernel call to be executed) takes 0.3 microseconds - so named pipes scheduled onto a single core still have a lot of overhead.
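
The benchmark source isn't included in the answer; a minimal sketch of the kind of single-byte TCP ping-pong client it describes (the port number and iteration count are only illustrative, and the peer is assumed to echo each byte back) could be:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class TcpPingClient {
  public static void main(String[] args) throws Exception {
    try (Socket s = new Socket("localhost", 9999)) {
      s.setTcpNoDelay(true);                 // avoid Nagle batching of single bytes
      OutputStream out = s.getOutputStream();
      InputStream in = s.getInputStream();
      int iterations = 100_000;
      long start = System.nanoTime();
      for (int i = 0; i < iterations; i++) {
        out.write(1);                        // send one byte
        in.read();                           // wait for the one-byte echo
      }
      long avg = (System.nanoTime() - start) / iterations;
      System.out.println("avg round trip: " + avg + " ns");
    }
  }
}

A named-pipe version would be the same loop with the socket streams replaced by streams over a FIFO.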

Some shared memory measurement: September 14, 2009 – Solace Systems announced today that its Unified Messaging Platform API can achieve an average latency of less than 700 nanoseconds using a shared memory transport. http://solacesystems.com/news/fastest-ipc-messaging/

P.S. - tried shared memory the next day in the form of memory-mapped files. If busy waiting is acceptable, we can reduce the latency to 0.3 microseconds for passing a single byte with code like this:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedSrv { // class wrapper and name added so the snippet compiles standalone
  public static void main(String[] args) throws Exception {
    // map a single shared byte backed by /tmp/mapped.txt
    MappedByteBuffer mem = new RandomAccessFile("/tmp/mapped.txt", "rw")
        .getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1);
    while (true) {
      while (mem.get(0) != 5) Thread.sleep(0); // waiting for client request
      mem.put(0, (byte) 10);                   // sending the reply
    }
  }
}
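
The answer only shows the server side. A matching client loop is not in the original, but under the same protocol (client writes 5, server answers 10), a sketch of it, with an illustrative class name, might look like this:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedCli {
  public static void main(String[] args) throws Exception {
    MappedByteBuffer mem = new RandomAccessFile("/tmp/mapped.txt", "rw")
        .getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1);
    long t0 = System.nanoTime();
    mem.put(0, (byte) 5);                      // send the request byte
    while (mem.get(0) != 10) Thread.sleep(0);  // busy-wait for the reply
    System.out.println("round trip: " + (System.nanoTime() - t0) + " ns");
  }
}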

Notes: Thread.sleep(0) is needed so the 2 processes can see each other's changes (I don't know of another way yet). If the 2 processes are forced onto the same core with taskset, the latency becomes 1.5 microseconds - that's a context switch delay.

P.P.S - and 0.3 microseconds is a good number! The following code takes exactly 0.1 microseconds, while doing only a primitive string concatenation:

int j=123456789;
String ret = "my-record-key-" + j  + "-in-db";

P.P.P.S - hope this is not too much off-topic, but finally I tried replacing Thread.sleep(0) with incrementing a static volatile int variable (the JVM happens to flush CPU caches when doing so) and obtained - a record! - 72 nanoseconds latency for java-to-java process communication!
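
That variant isn't shown in the answer; a sketch of what the loop might look like with the volatile increment swapped in for Thread.sleep(0) (class and field names here are mine) is:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SpinSrv {
  static volatile int spin; // incremented in place of Thread.sleep(0), per the note above

  public static void main(String[] args) throws Exception {
    MappedByteBuffer mem = new RandomAccessFile("/tmp/mapped.txt", "rw")
        .getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1);
    while (true) {
      while (mem.get(0) != 5) spin++; // busy-wait on the request byte
      mem.put(0, (byte) 10);          // reply
    }
  }
}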

When forced onto the same CPU core, however, the volatile-incrementing JVMs never yield control to each other, thus producing exactly 10 milliseconds of latency - the Linux time quantum seems to be 5 ms... So this should be used only if there is a spare core - otherwise sleep(0) is safer.

This concludes the article on the fastest (low-latency) method for inter-process communication between Java and C/C++. We hope the answer presented here is helpful; thank you for your support!
