1. Get the event logs, error traces, and exceptions into one location and enable powerful search that can scale out seamlessly. Ideally one could/should use
Logstash (the poor man's *plunk alternative)
2. Create a search frontend for your application: frequently looked-up items, cached items, or just a regular search system such as you would build for a
catalog of items or issues (customer pain points), or, gulp, even the primary data store for certain kinds of applications
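For the log-shipping use case in item 1, a minimal Logstash pipeline might look like the sketch below (the log path, grok pattern, and Elasticsearch host are all illustrative assumptions, not taken from any real deployment):

```
# sketch of a Logstash 1.x config – paths and hosts are hypothetical
input {
  file {
    path => "/var/log/myapp/*.log"    # assumed application log location
  }
}
filter {
  grok {
    # assumed log line format: "ERROR something went wrong"
    match => [ "message", "%{LOGLEVEL:level} %{GREEDYDATA:msg}" ]
  }
}
output {
  elasticsearch {
    host => "localhost"               # single-node ES for testing
  }
}
```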
We have used and proposed Solr earlier – lately, Elasticsearch's monitoring and its simplicity of scale-out/availability are what have made us push this Lucene-based alternative more for customers.
When you would not use this kind of search service
If there is a hosted native search service which offers cheaper storage and better query times (backed by a faster backend), or you are ready to pay the $ for a given throughput and storage.
sudo apt-get install openjdk-7-jdk
tar -xzf elasticsearch-0.90.7.tar.gz
bin/elasticsearch -f (and you start dumping/querying the data), or put it in init.d
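Once the node is up, you can dump and query data straight over HTTP. A quick smoke test might look like this (the `logs` index name and the sample document are made up for illustration; it assumes a node listening on the default localhost:9200):

```shell
# a sample event to index; index name "logs" and type "event" are arbitrary
DOC='{"level":"ERROR","message":"connection timed out","ts":"2013-12-01T10:00:00"}'

# index one event (assumes Elasticsearch is running on localhost:9200)
curl -s -XPOST 'http://localhost:9200/logs/event' -d "$DOC"

# search for it; ?q= takes Lucene query-string syntax
curl -s 'http://localhost:9200/logs/_search?q=level:ERROR'
```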
ElasticHq – http://www.elastichq.org/support_plugin.html (available as hosted version too)
Kopf – https://github.com/lmenezes/elasticsearch-kopf
BigDesk – https://github.com/lukas-vlcek/bigdesk/ (more comprehensive imho)
Out-of-the-box stats – http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
Paid – http://sematext.com/spm/elasticsearch-performance-monitoring/
Learn – http://www.elasticsearch.org/videos/bbuzz2013-getting-down-and-dirty-with-elasticsearch/
Use Oracle JDK
Use the G1 GC (http://www.infoq.com/articles/G1-One-Garbage-Collector-To-Rule-Them-All)
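If you do switch to G1, one way to pass the flag is via environment variables before starting the node. A sketch, assuming the 0.90.x startup scripts honor ES_HEAP_SIZE and ES_JAVA_OPTS (double-check your copy of bin/elasticsearch.in.sh; the 4g heap is just an example value):

```shell
# pin the heap and enable G1 before launching Elasticsearch
export ES_HEAP_SIZE=4g                 # example heap size, tune per box
export ES_JAVA_OPTS="-XX:+UseG1GC"     # appended to JAVA_OPTS by the startup script
# bin/elasticsearch -f
```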
Kibana/Logstash also work without any issues.
Caveat – Azure does not support multicast, so discovery becomes unicast-based, with the node addresses pretty much hard-coded into the configuration file.
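The unicast setup amounts to a couple of lines in config/elasticsearch.yml; the hostnames below are placeholders for your own nodes:

```yaml
# config/elasticsearch.yml – disable multicast, list cluster nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.internal:9300", "node2.example.internal:9300"]
```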