Druid Official Documentation Translation: Druid Quickstart

Druid Quickstart

In this quickstart, we will download Druid and set it up on a single machine. The cluster will be ready to load data after completing this initial setup.

Before beginning the quickstart, it is helpful to read the general Druid overview and the ingestion overview, as the tutorials will refer to concepts discussed on those pages.

Prerequisites

You will need:

  • Java 8
  • Linux, Mac OS X, or other Unix-like OS (Windows is not supported)
  • 8G of RAM
  • 2 vCPUs

On Mac OS X, you can use Oracle’s JDK 8 to install Java.

On Linux, your OS package manager should be able to help you install Java. If your Ubuntu-based OS does not have a recent enough version of Java, WebUpd8 offers packages for those OSes.
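As a quick sanity check of the Java 8 requirement, you can parse the major version out of a `java -version`-style string. The version line below is a hard-coded sample of JDK 8's "1.x" numbering scheme, not live output from your machine:

```shell
# Sketch: extract the major version from a Java 8 style version string.
# `version_line` is a sample of what `java -version 2>&1 | head -n 1`
# prints for a JDK 8 install; substitute the real output in practice.
version_line='java version "1.8.0_212"'
major=$(echo "$version_line" | sed -E 's/.*"1\.([0-9]+).*/\1/')
echo "major version: $major"
```

If `major` is anything other than 8, install a suitable JDK before continuing.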

Getting started

Download the 0.13.0-incubating release.

Extract Druid by running the following commands in your terminal:

tar -xzf apache-druid-0.13.0-incubating-bin.tar.gz
cd apache-druid-0.13.0-incubating

In the package, you should find:

  • DISCLAIMER, LICENSE, and NOTICE files
  • bin/* – scripts useful for this quickstart
  • conf/* – template configurations for a clustered setup
  • extensions/* – core Druid extensions
  • hadoop-dependencies/* – Druid Hadoop dependencies
  • lib/* – libraries and dependencies for core Druid
  • quickstart/* – configuration files, sample data, and other files for the quickstart tutorials

Download Zookeeper

Druid has a dependency on Apache ZooKeeper for distributed coordination. You’ll need to download and run Zookeeper.

In the package root, run the following commands:

curl https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz -o zookeeper-3.4.11.tar.gz
tar -xzf zookeeper-3.4.11.tar.gz
mv zookeeper-3.4.11 zk

The startup scripts for the tutorial will expect the contents of the Zookeeper tarball to be located at zk under the apache-druid-0.13.0-incubating package root.
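If the supervise script later fails to start Zookeeper, the usual cause is a misplaced directory. A self-contained sketch of the expected rename, using a throwaway temp directory in place of a real download:

```shell
# Sketch of the layout the tutorial scripts expect: the extracted
# zookeeper-3.4.11 directory must end up named `zk` directly under the
# package root. A temp dir stands in for the root here, so this is
# safe to run anywhere.
root=$(mktemp -d)
mkdir "$root/zookeeper-3.4.11"          # stand-in for the extracted tarball
mv "$root/zookeeper-3.4.11" "$root/zk"  # the rename from the commands above
[ -d "$root/zk" ] && layout=ok || layout=missing
echo "zk layout: $layout"
rm -rf "$root"
```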

Start up Druid services

From the apache-druid-0.13.0-incubating package root, run the following command:

 bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf

This will bring up instances of Zookeeper and the Druid services, all running on the local machine, e.g.:

 bin/supervise -c quickstart/tutorial/conf/tutorial-cluster.conf
[Thu Jul 26 12:16:23 2018] Running command[zk], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/zk.log]: bin/run-zk quickstart/tutorial/conf
[Thu Jul 26 12:16:23 2018] Running command[coordinator], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/coordinator.log]: bin/run-druid coordinator quickstart/tutorial/conf
[Thu Jul 26 12:16:23 2018] Running command[broker], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/broker.log]: bin/run-druid broker quickstart/tutorial/conf
[Thu Jul 26 12:16:23 2018] Running command[historical], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/historical.log]: bin/run-druid historical quickstart/tutorial/conf
[Thu Jul 26 12:16:23 2018] Running command[overlord], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/overlord.log]: bin/run-druid overlord quickstart/tutorial/conf
[Thu Jul 26 12:16:23 2018] Running command[middleManager], logging to[/stage/apache-druid-0.13.0-incubating/var/sv/middleManager.log]: bin/run-druid middleManager quickstart/tutorial/conf

All persistent state such as the cluster metadata store and segments for the services will be kept in the var directory under the apache-druid-0.13.0-incubating package root. Logs for the services are located at var/sv.

Later on, if you’d like to stop the services, CTRL-C to exit the bin/supervise script, which will terminate the Druid processes.
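Because the per-service logs under var/sv are plain text, ordinary tools apply when troubleshooting. For example, counting error lines (the log contents below are simulated, not real Druid output; in a real install you would point this at e.g. var/sv/historical.log):

```shell
# Sketch: grep a service log for error lines. The log contents are
# invented stand-ins for what a Druid service log might contain.
log=$(mktemp)
printf '%s\n' \
  'INFO  [main] startup complete' \
  'ERROR [qtp-12] failed to announce segment' > "$log"
errors=$(grep -c '^ERROR' "$log")
echo "error lines: $errors"
rm -f "$log"
```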

Resetting cluster state

If you want a clean start after stopping the services, delete the var directory and run the bin/supervise script again.
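A self-contained sketch of that reset, run against a throwaway directory rather than a real install (in practice you would run `rm -rf var` from the package root):

```shell
# Sketch: deleting `var` removes all local state (metadata store,
# segment cache, service logs) in one step. A temp dir stands in for
# the package root so this is safe to run anywhere.
root=$(mktemp -d)
mkdir -p "$root/var/sv"
echo 'old broker log' > "$root/var/sv/broker.log"
rm -rf "$root/var"                      # the reset itself
[ -e "$root/var" ] && state=dirty || state=clean
echo "cluster state: $state"
rm -rf "$root"
```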

Once every service has started, you are now ready to load data.

Resetting Kafka

If you completed Tutorial: Loading stream data from Kafka and wish to reset the cluster state, you should additionally clear out any Kafka state.

Shut down the Kafka broker with CTRL-C before stopping Zookeeper and the Druid services, and then delete the Kafka log directory at /tmp/kafka-logs:

rm -rf /tmp/kafka-logs

Loading Data

Tutorial Dataset

For the following data loading tutorials, we have included a sample data file containing Wikipedia page edit events that occurred on 2015-09-12.

This sample data is located at quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz from the Druid package root. The page edit events are stored as JSON objects in a text file.

The sample data has the following columns:

  • added
  • channel
  • cityName
  • comment
  • countryIsoCode
  • countryName
  • deleted
  • delta
  • isAnonymous
  • isMinor
  • isNew
  • isRobot
  • isUnpatrolled
  • metroCode
  • namespace
  • page
  • regionIsoCode
  • regionName
  • user
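The file is line-delimited JSON, one edit event per line, compressed with gzip. A sketch of inspecting the first event, using a made-up record with a few of the columns above rather than a line from the real file:

```shell
# Sketch: build a small gzipped line-delimited JSON file shaped like the
# sample data, then read its first event the way you would with the real
# quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz file.
# The event below is invented for illustration.
dir=$(mktemp -d)
sample="$dir/sample.json.gz"
printf '%s\n' \
  '{"channel":"#en.wikipedia","page":"Example","user":"editor","added":36,"deleted":0,"delta":36}' \
  | gzip -c > "$sample"
first_event=$(gzip -cd "$sample" | head -n 1)
echo "$first_event"
rm -rf "$dir"
```

Against the real file, the equivalent would be `gzip -cd quickstart/tutorial/wikiticker-2015-09-12-sampled.json.gz | head -n 1`.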

The following tutorials demonstrate various methods of loading data into Druid, including both batch and streaming use cases.

Tutorial: Loading a file

This tutorial demonstrates how to perform a batch file load, using Druid’s native batch ingestion.

Tutorial: Loading stream data from Kafka

This tutorial demonstrates how to load streaming data from a Kafka topic.

Tutorial: Loading a file using Hadoop

This tutorial demonstrates how to perform a batch file load, using a remote Hadoop cluster.

Tutorial: Loading data using Tranquility

This tutorial demonstrates how to load streaming data by pushing events to Druid using the Tranquility service.

Tutorial: Writing your own ingestion spec

This tutorial demonstrates how to write a new ingestion spec and use it to load data.
