Tux, Varnish or Squid?

Problem Description

We need a web content accelerator for static images to sit in front of our Apache web front end servers.

Our previous hosting partner used Tux with great success, and I like the fact it's part of Red Hat Linux, which we're using, but its last update was in 2006 and there seems little chance of future development. Our ISP recommends we use Squid in a reverse caching proxy role.
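For reference, a minimal sketch of a Squid reverse-proxy ("accelerator") configuration in the Squid 2.6+ style; the site name, backend address and port are placeholders, not details from the original post:

    # Listen on port 80 in accelerator mode for our site
    http_port 80 accel defaultsite=www.example.com

    # Forward cache misses to the Apache origin server on port 8080
    cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache

    # Only accelerate requests for our own domain
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access apache allow our_site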

Any thoughts between Tux and Squid? Compatibility, reliability and future support are as important to us as performance.

Also, I read in other threads here about Varnish; anyone have any real-world experience of Varnish compared with Squid, and/or Tux, gained in high-traffic environments?

Cheers

Ian

UPDATE: We're testing Squid now. Using ab to pull the same image 10,000 times with a concurrency of 100, both Apache on its own and Squid/Apache burned through the requests very quickly. But Squid made only a single request to Apache for the image then served them all from RAM, whereas Apache alone had to fork a large number of workers in order to serve the images. It looks like Squid will work well in freeing up the Apache workers to handle dynamic pages.
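The benchmark described above corresponds to an ab invocation along these lines (the image URL is a placeholder):

    # 10,000 requests for the same image, 100 concurrent connections
    ab -n 10000 -c 100 http://www.example.com/images/test.jpg

The single upstream request is Squid serving every subsequent hit from its memory cache; Apache only ever sees the initial miss.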

Recommended Answer

In my experience Varnish is much faster than Squid, but equally importantly it's much less of a black box than Squid is. Varnish gives you access to very detailed logs that are useful when debugging problems. Its configuration language is also much simpler and much more powerful than Squid's.
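To illustrate the point about the configuration language, here is a minimal VCL sketch (Varnish 2/3-era syntax; the backend address and port are assumptions) that strips cookies from image requests so they can be cached:

    # Hypothetical Apache origin sitting behind Varnish
    backend default {
        .host = "127.0.0.1";
        .port = "8080";
    }

    sub vcl_recv {
        # Ignore cookies on static images so Varnish can cache them
        if (req.url ~ "\.(gif|jpg|jpeg|png)$") {
            unset req.http.Cookie;
        }
    }

The detailed logging mentioned above comes from tools such as varnishlog, which streams a per-request record of every transaction.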
