Will my gen_server become a bottleneck?

I'm currently writing a piece of software in Erlang, which is based on the gen_server behaviour. This gen_server should export a function (let's call it update/1) which connects over SSL to another service online and sends it the value passed as an argument.

Currently update/1 is like this:

update(Value) ->
  gen_server:call(?SERVER, {update, Value}).

So once it is called, there is a call to ?SERVER which is handled as:

handle_call({update, Value}, _From, State) ->
    {ok, Socket} = ssl:connect("remoteserver.com", 5555, [], 3000),
    Reply = ssl:send(Socket, Value),
    {reply, Reply, State}.

Once the packet is sent to the remote server, the peer should sever the connection.

Now, this works fine in my shell tests, but what happens if mymod:update(Value) is called 1000 times while ssl:connect/4 is not working well (i.e. it keeps hitting its timeout)?

At this point, my gen_server will have accumulated a very large backlog of values, and they can be processed only one at a time, so the 1000th update will complete only 1000 * 3000 milliseconds after its value was submitted via update/1.

Using a cast instead of a call would lead to the same problem. How can I solve this? Should I use a plain function instead of a gen_server call?


From personal experience I can say that 1000 messages queued on a gen_server process won't be a problem unless you are queuing big messages.
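One way to verify this claim in your own tests is to watch the server's mailbox size: erlang:process_info/2 with message_queue_len reports how many messages are waiting. A small hypothetical helper (module and function names are mine, not from the question):

```erlang
-module(backlog).
-export([check/1]).

%% Returns the number of messages currently queued in Pid's mailbox.
%% A value that keeps growing between checks means the server
%% cannot drain its queue as fast as callers fill it.
check(Pid) ->
    {message_queue_len, Len} = erlang:process_info(Pid, message_queue_len),
    Len.
```

You could call backlog:check(whereis(?SERVER)) periodically while load-testing update/1 to see whether the queue actually builds up.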

If your testing shows that a single gen_server cannot handle this much load, then you should create multiple instances of your gen_server, preferably under a supervisor process, at boot time (or at run time) of your application.
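A variation on that idea is to spawn one short-lived worker per update under a simple_one_for_one supervisor, so a slow ssl:connect/4 only blocks its own worker rather than a shared server. This is a sketch with hypothetical module names (updater_sup, updater_worker) and the host/port/timeout taken from the question:

```erlang
-module(updater_sup).
-behaviour(supervisor).
-export([start_link/0, start_update/1, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% One temporary child per update; children run concurrently, so
%% a 3000 ms connect timeout never delays the other updates.
start_update(Value) ->
    supervisor:start_child(?MODULE, [Value]).

init([]) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 10, period => 60},
    Child = #{id => updater_worker,
              start => {updater_worker, start_link, []},
              restart => temporary,
              type => worker},
    {ok, {SupFlags, [Child]}}.
```

```erlang
-module(updater_worker).
-export([start_link/1, init/1]).

start_link(Value) ->
    {ok, spawn_link(?MODULE, init, [Value])}.

%% The slow, blocking part runs here, isolated per update.
init(Value) ->
    {ok, Socket} = ssl:connect("remoteserver.com", 5555, [], 3000),
    ok = ssl:send(Socket, Value),
    ssl:close(Socket).
```

With this shape the original gen_server (if you keep it at all) only dispatches work and stays responsive.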

Besides that, I really don't understand the requirement of making a new connection for each update. You should consider an optimization such as caching the connection (or pre-connecting) to the server, no?
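For instance, the gen_server could keep one SSL socket in its state and reconnect lazily only when a send fails. A rough sketch of what that handle_call might look like (the helper names send_with_retry/2 and connect_and_send/1 are hypothetical; this assumes the state is a map holding the socket, or undefined before the first connect):

```erlang
handle_call({update, Value}, _From, State = #{socket := Sock0}) ->
    {Sock, Reply} = send_with_retry(Sock0, Value),
    {reply, Reply, State#{socket := Sock}}.

%% No cached socket yet: connect, then send.
send_with_retry(undefined, Value) ->
    connect_and_send(Value);
%% Cached socket: try it first, reconnect once on failure.
send_with_retry(Sock, Value) ->
    case ssl:send(Sock, Value) of
        ok -> {Sock, ok};
        {error, _} ->
            ssl:close(Sock),
            connect_and_send(Value)
    end.

connect_and_send(Value) ->
    case ssl:connect("remoteserver.com", 5555, [], 3000) of
        {ok, Sock} ->
            {Sock, ssl:send(Sock, Value)};
        {error, Reason} ->
            {undefined, {error, Reason}}
    end.
```

Note this only helps if the peer does not sever the connection after each packet, as the question suggests it does; if the protocol requires one connection per message, pooling workers (as above) is the better lever.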
