Spring Boot 2 Series (54): Installing and Integrating the FastDFS Distributed File Service

The project needs a file service. For the public network we use Alibaba Cloud OSS object storage; on the intranet, FastDFS is the first choice, with go-fastdfs as the fallback.

FastDFS is an open-source, lightweight distributed file system. Its main features include file storage, file synchronization, and file access (upload and download), solving the problems of large-capacity storage and high-performance access. FastDFS is especially suitable for file-centric online services such as images, videos, and documents. -- from the official description.

Official documentation for FastDFS is sparse, which makes it hard to learn quickly, so some personal notes are needed. The root of the source tree contains an INSTALL file that records the steps for installing the FastDFS service.

References:

  • GitHub > FastDFS
  • Gitee > FastDFS
  • go-fastdfs
  • fastdfs-client-java
  • tobato/FastDFS_Client documentation

FastDFS

Concepts

Features

FastDFS has the following features:

  1. Grouped storage, simple and flexible;
  2. Peer-to-peer structure with no single point of failure;
  3. File IDs are generated by FastDFS and serve as the file access token; FastDFS needs no traditional name server or meta server;
  4. Good support for large, medium, and small files, including massive numbers of small files;
  5. One storage server can use multiple disks, with support for single-disk data recovery;
  6. An nginx extension module is provided for seamless integration with nginx;
  7. Multi-threaded upload and download, with support for resumable transfers;
  8. Storage servers can save additional file attributes (metadata).

Architecture

[Figure: FastDFS architecture diagram]

Components

As the architecture diagram shows, a FastDFS deployment consists of three parts:

  • Tracker Server: the tracker manages the Storage Server cluster and handles scheduling and load balancing. After starting, every Storage Server connects to the tracker, reports which Group it belongs to, and maintains a heartbeat. The Tracker Server plays the role of a registry and coordinator.
  • Storage Server: provides storage capacity and replication. Storage is organized into Groups; each Group can contain multiple Storage Servers whose data are mutual backups.
  • Client: performs uploads and downloads, usually a business application server. Official client libraries are provided for C and Java, plus a PHP client extension.

Installation

Refer to the INSTALL file in the root of the source package.

Download and install

  1. Install the dependency library: libfastcommon

    # step 1. download libfastcommon source codes and install it,
    # github address: https://github.com/happyfish100/libfastcommon.git
    # gitee address: https://gitee.com/fastdfs100/libfastcommon.git
    # command lines as:

    git clone https://github.com/happyfish100/libfastcommon.git
    cd libfastcommon;
    git checkout V1.0.43
    ./make.sh clean && ./make.sh && ./make.sh install
  2. Install the service: fastdfs

    # step 2. download fastdfs source codes and install it, 
    # github address: https://github.com/happyfish100/fastdfs.git
    # gitee address: https://gitee.com/fastdfs100/fastdfs.git
    # command lines as:

    git clone https://github.com/happyfish100/fastdfs.git
    cd fastdfs;
    git checkout V6.06
    ./make.sh clean && ./make.sh && ./make.sh install

    After installation, the /etc/fdfs directory is created and contains sample configuration files:

    [root@localhost fdfs]# ls
    client.conf.sample storage.conf.sample storage_ids.conf.sample tracker.conf.sample

    As you can see, there are sample configurations for the client, the storage service, and the tracker service.

Configuration

This example builds a minimal FastDFS cluster on two servers running CentOS. Each server runs one tracker instance and one storage instance; the two storage instances form one group.

  1. Set up the configuration files (run the same steps on both servers)

    Enter the root directory of the FastDFS source package:

    [root@localhost fastdfs-6.06]# ./setup.sh /etc/fdfs

    Looking at setup.sh, you can see that it simply copies the configuration files from the conf directory to /etc/fdfs. After running the script, /etc/fdfs looks like this:

    # ls /etc/fdfs/
    client.conf mime.types storage_ids.conf tracker.conf.sample
    client.conf.sample storage.conf storage_ids.conf.sample
    http.conf storage.conf.sample tracker.conf
  2. Edit the tracker, storage, and client configuration files (make the same changes on both servers)

    vi /etc/fdfs/tracker.conf
    vi /etc/fdfs/storage.conf
    vi /etc/fdfs/client.conf

    and so on ...

    You can also download the configuration files from the Linux host, edit them locally, and upload them back to overwrite the originals. The key settings to change are:

    • Client configuration file client.conf: set the tracker server addresses

      If the base_path directory does not exist, it must be created first, otherwise the service will not start.

      # base path for log files (the path below is the default)
      base_path = /home/yuqing/fastdfs
      # tracker server addresses; multiple entries can be configured for a cluster
      tracker_server = 192.168.50.129:22122
      tracker_server = 192.168.50.132:22122
    • Storage configuration file storage.conf: set the tracker server addresses

      If the base_path or store_path0 directories do not exist, they must be created first, otherwise the service will not start.

      # base path for data and log files (the path below is the default)
      base_path = /home/yuqing/fastdfs

      # number of store paths
      store_path_count = 1

      # store path for files; if not set, base_path is used (not recommended)
      store_path0 = /home/yuqing/fastdfs
      #store_path1 = /home/yuqing/fastdfs2
      # tracker server addresses; multiple entries can be configured for a cluster
      tracker_server = 192.168.50.132:22122
      tracker_server = 192.168.50.129:22122
    • Storage group and ID configuration file storage_ids.conf: set the storage server IPs

      Storage servers are assigned to groups; a group can contain multiple storage servers, and each server needs a unique ID.

      100001   group1  192.168.50.129
      100002   group1  192.168.50.132

    Note: in the example above the tracker service and the storage service are deployed on the same physical server, which is why the IPs are the same.
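Each line of storage_ids.conf has three whitespace-separated fields: a server id (a number >= 100000), the group name, and the host (IP or hostname). A minimal parsing sketch of this format (the helper name is my own):

```python
def parse_storage_ids(text):
    """Parse storage_ids.conf lines into (server_id, group, host) tuples.

    Blank lines and '#' comments are skipped, matching FastDFS conventions.
    """
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        server_id, group, host = line.split()
        entries.append((server_id, group, host))
    return entries

conf = """
# <id>  <group_name>  <ip_or_hostname>
100001   group1  192.168.50.129
100002   group1  192.168.50.132
"""
print(parse_storage_ids(conf))
```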

Starting the services

Run the same commands on both servers.

  1. Start the tracker service

    # start the tracker server:
    /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
  2. Start the storage service

    # start the storage server:
    /usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
  3. Optionally, register the applications as Linux services

    # (optional) in Linux, you can start fdfs_trackerd and fdfs_storaged as a service:
    /sbin/service fdfs_trackerd restart
    /sbin/service fdfs_storaged restart

    After that you can manage them with the Linux systemctl command: start, stop, reload, restart, status, etc. For example:

    systemctl restart fdfs_trackerd
    systemctl restart fdfs_storaged
    systemctl status fdfs_storaged

    Example output:

    [root@localhost data]# systemctl status fdfs_trackerd
    ● fdfs_trackerd.service - LSB: FastDFS tracker server
    Loaded: loaded (/etc/rc.d/init.d/fdfs_trackerd; bad; vendor preset: disabled)
    Active: active (running) since 二 2020-04-21 16:03:51 CST; 45min ago
    Docs: man:systemd-sysv-generator(8)
    Process: 7781 ExecStop=/etc/rc.d/init.d/fdfs_trackerd stop (code=exited, status=2)
    Process: 7787 ExecStart=/etc/rc.d/init.d/fdfs_trackerd start (code=exited, status=0/SUCCESS)
    CGroup: /system.slice/fdfs_trackerd.service
    └─7792 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf

    4月 21 16:03:51 localhost.localdomain systemd[1]: Starting LSB: FastDFS tracker server...
    4月 21 16:03:51 localhost.localdomain fdfs_trackerd[7787]: Starting FastDFS tracker server:
    4月 21 16:03:51 localhost.localdomain systemd[1]: Started LSB: FastDFS tracker server.

    [root@localhost data]# systemctl status fdfs_storaged
    ● fdfs_storaged.service - LSB: FastDFS storage server
    Loaded: loaded (/etc/rc.d/init.d/fdfs_storaged; bad; vendor preset: disabled)
    Active: active (running) since 二 2020-04-21 16:06:02 CST; 43min ago
    Docs: man:systemd-sysv-generator(8)
    Process: 11309 ExecStop=/etc/rc.d/init.d/fdfs_storaged stop (code=exited, status=2)
    Process: 11315 ExecStart=/etc/rc.d/init.d/fdfs_storaged start (code=exited, status=0/SUCCESS)
    CGroup: /system.slice/fdfs_storaged.service
    └─11320 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf

    4月 21 16:06:02 localhost.localdomain systemd[1]: Starting LSB: FastDFS storage server...
    4月 21 16:06:02 localhost.localdomain fdfs_storaged[11315]: Starting FastDFS storage server:
    4月 21 16:06:02 localhost.localdomain systemd[1]: Started LSB: FastDFS storage server.
  4. You can check the ports occupied by the services with the Linux netstat tool

    [root@localhost fastdfs]# netstat -tp
    Active Internet connections (w/o servers)
    Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
    tcp 0 0 192.168.50.132:22122 192.168.50.132:42361 ESTABLISHED 7792/fdfs_trackerd
    tcp 0 0 192.168.50.132:42361 192.168.50.132:22122 ESTABLISHED 11320/fdfs_storaged
  5. After the services start, the base_path directory contains a data directory created by FastDFS for file data, and a logs directory for log files:

    [root@localhost fastdfs]# ls
    data logs

    [root@localhost fastdfs]# ls data/
    00 12 24 36 48 5A 6C 7E 90 A2 B4 C6 D8 EA FC
    01 13 25 37 49 5B 6D 7F 91 A3 B5 C7 D9 EB FD
    02 14 26 38 4A 5C 6E 80 92 A4 B6 C8 DA EC fdfs_storaged.pid
    03 15 27 39 4B 5D 6F 81 93 A5 B7 C9 DB ED fdfs_trackerd.pid
    04 16 28 3A 4C 5E 70 82 94 A6 B8 CA DC EE FE
    05 17 29 3B 4D 5F 71 83 95 A7 B9 CB DD EF FF
    06 18 2A 3C 4E 60 72 84 96 A8 BA CC DE F0 storage_changelog.dat
    07 19 2B 3D 4F 61 73 85 97 A9 BB CD DF F1 storage_groups_new.dat
    08 1A 2C 3E 50 62 74 86 98 AA BC CE E0 F2 storage_servers_new.dat
    09 1B 2D 3F 51 63 75 87 99 AB BD CF E1 F3 storage_stat.dat
    0A 1C 2E 40 52 64 76 88 9A AC BE D0 E2 F4 storage_sync_timestamp.dat
    0B 1D 2F 41 53 65 77 89 9B AD BF D1 E3 F5 sync
    0C 1E 30 42 54 66 78 8A 9C AE C0 D2 E4 F6
    0D 1F 31 43 55 67 79 8B 9D AF C1 D3 E5 F7
    0E 20 32 44 56 68 7A 8C 9E B0 C2 D4 E6 F8
    0F 21 33 45 57 69 7B 8D 9F B1 C3 D5 E7 F9
    10 22 34 46 58 6A 7C 8E A0 B2 C4 D6 E8 FA
    11 23 35 47 59 6B 7D 8F A1 B3 C5 D7 E9 FB

    [root@localhost fastdfs]# ls logs/
    storaged.log trackerd.log

    As you can see, 256 subdirectories named 00 - FF were created in the data directory; uploaded files are stored in these subdirectories.
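The storage data layout is actually two levels deep: with the default subdir_count_per_path = 256 (visible later in the fdfs_monitor output), each of the 256 first-level directories 00-FF contains 256 second-level directories with the same names, which is why file ids contain paths like 00/00. A sketch of the naming scheme:

```python
# 256 uppercase-hex directory names per level, as created under store_path0/data
subdirs = [f"{i:02X}" for i in range(256)]

# every two-level combination, e.g. "00/00" ... "FF/FF"
paths = [f"{a}/{b}" for a in subdirs for b in subdirs]

print(subdirs[0], subdirs[-1], len(paths))
```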

Client monitoring

  1. Run the monitor tool

    # such as:
    /usr/bin/fdfs_monitor /etc/fdfs/client.conf

    This prints information about the tracker server itself and about the storage servers it has collected. Example:

    [root@localhost fastdfs-6.06]# /usr/bin/fdfs_monitor /etc/fdfs/client.conf
    [2020-04-27 11:04:39] DEBUG - base_path=/home/yuqing/fastdfs, connect_timeout=5, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

    server_count=2, server_index=1

    tracker server is 192.168.50.132:22122

    group count: 1

    Group 1:
    group name = group1
    disk total space = 24,032 MB
    disk free space = 23,958 MB
    trunk free space = 0 MB
    storage server count = 2
    active server count = 2
    storage server port = 23000
    storage HTTP port = 8888
    store path count = 1
    subdir count per path = 256
    current write server index = 0
    current trunk file id = 0

    Storage 1:
    id = 192.168.50.129
    ip_addr = 192.168.50.129 ACTIVE
    http domain =
    version = 6.06
    join time = 2020-04-27 10:37:53
    up time = 2020-04-27 10:37:53
    total storage = 24,032 MB
    free storage = 23,958 MB
    upload priority = 10
    store_path_count = 1
    subdir_count_per_path = 256
    storage_port = 23000
    storage_http_port = 8888
    current_write_path = 0
    source storage id = 192.168.50.132
    if_trunk_server = 0
    connection.alloc_count = 256
    connection.current_count = 1
    connection.max_count = 3
    total_upload_count = 4
    success_upload_count = 4
    total_append_count = 0
    success_append_count = 0
    total_modify_count = 0
    success_modify_count = 0
    total_truncate_count = 0
    success_truncate_count = 0
    total_set_meta_count = 4
    success_set_meta_count = 4
    total_delete_count = 0
    success_delete_count = 0
    total_download_count = 0
    success_download_count = 0
    total_get_meta_count = 0
    success_get_meta_count = 0
    total_create_link_count = 0
    success_create_link_count = 0
    total_delete_link_count = 0
    success_delete_link_count = 0
    total_upload_bytes = 9792
    success_upload_bytes = 9792
    total_append_bytes = 0
    success_append_bytes = 0
    total_modify_bytes = 0
    success_modify_bytes = 0
    stotal_download_bytes = 0
    success_download_bytes = 0
    total_sync_in_bytes = 9988
    success_sync_in_bytes = 9988
    total_sync_out_bytes = 0
    success_sync_out_bytes = 0
    total_file_open_count = 12
    success_file_open_count = 12
    total_file_read_count = 0
    success_file_read_count = 0
    total_file_write_count = 12
    success_file_write_count = 12
    last_heart_beat_time = 2020-04-27 10:41:53
    last_source_update = 2020-04-27 10:50:21
    last_sync_update = 2020-04-27 10:43:49
    last_synced_timestamp = 2020-04-21 17:37:48 (0s delay)
    Storage 2:
    id = 192.168.50.132
    ip_addr = 192.168.50.132 ACTIVE
    http domain =
    version = 6.06
    join time = 2020-04-21 16:06:02
    up time = 2020-04-27 10:21:49
    total storage = 42,547 MB
    free storage = 42,465 MB
    upload priority = 10
    store_path_count = 1
    subdir_count_per_path = 256
    storage_port = 23000
    storage_http_port = 8888
    current_write_path = 0
    source storage id =
    if_trunk_server = 0
    connection.alloc_count = 256
    connection.current_count = 1
    connection.max_count = 2
    total_upload_count = 4
    success_upload_count = 4
    total_append_count = 0
    success_append_count = 0
    total_modify_count = 0
    success_modify_count = 0
    total_truncate_count = 0
    success_truncate_count = 0
    total_set_meta_count = 4
    success_set_meta_count = 4
    total_delete_count = 0
    success_delete_count = 0
    total_download_count = 0
    success_download_count = 0
    total_get_meta_count = 0
    success_get_meta_count = 0
    total_create_link_count = 0
    success_create_link_count = 0
    total_delete_link_count = 0
    success_delete_link_count = 0
    total_upload_bytes = 9792
    success_upload_bytes = 9792
    total_append_bytes = 0
    success_append_bytes = 0
    total_modify_bytes = 0
    success_modify_bytes = 0
    stotal_download_bytes = 0
    success_download_bytes = 0
    total_sync_in_bytes = 9988
    success_sync_in_bytes = 9988
    total_sync_out_bytes = 0
    success_sync_out_bytes = 0
    total_file_open_count = 12
    success_file_open_count = 12
    total_file_read_count = 0
    success_file_read_count = 0
    total_file_write_count = 12
    success_file_write_count = 12
    last_heart_beat_time = 2020-04-27 10:41:49
    last_source_update = 2020-04-21 17:37:48
    last_sync_update = 2020-04-27 10:27:48
    last_synced_timestamp = 2020-04-27 10:50:21 (0s delay)

Upload test

Run the test tool that ships with FastDFS:

[root@localhost fastdfs]# /usr/bin/fdfs_test /etc/fdfs/client.conf upload README.md
This is FastDFS client test program v6.06

Copyright (C) 2008, Happy Fish / YuQing

FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.fastken.com/
for more detail.

[2020-04-21 17:22:14] DEBUG - base_path=/home/yuqing/fastdfs, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

tracker_query_storage_store_list_without_group:
server 1. group_name=, ip_addr=192.168.50.132, port=23000

group_name=group1, ip_addr=192.168.50.132, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md
source ip address: 192.168.50.132
file timestamp=2020-04-21 17:22:14
file size=2448
file crc32=3850358360
example file url: http://192.168.50.132/group1/M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145_big.md
source ip address: 192.168.50.132
file timestamp=2020-04-21 17:22:14
file size=2448
file crc32=3850358360
example file url: http://192.168.50.132/group1/M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145_big.md

In the data directory you can see the file that was just uploaded:

[root@localhost 00]# pwd
/fastdfs/data/00/00

[root@localhost 00]# ls
wKgyhF6eu0aAYkY8AAAJkOV_zlg8145_big.md wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md
wKgyhF6eu0aAYkY8AAAJkOV_zlg8145_big.md-m wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md-m

Underlying storage

From the path returned by the upload test, you can get a rough picture of how FastDFS stores files.

/group1/M00/00/00/wKgyhF6evuyAFzudAAAJkOV_zlg7474.md
  • FastDFS does not split files into blocks; a file is stored directly on the Storage Server, so the maximum file size depends on the underlying file system of the operating system. Storing very large files is generally not recommended.
  • FastDFS storage is organized by Group: a group can contain multiple Storage Servers. When one Storage Server receives an uploaded file, it synchronizes the file to the other Storage Servers in the same group, so they serve as mutual backups.
  • Storage servers support online capacity expansion.
  • The remote file path returned after a successful upload consists of the group name, the store path (virtual disk), the storage subdirectories, and the file name.
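The file name itself also encodes metadata. Its first 27 characters are URL-safe base64 over 20 bytes: the source storage IP (4 bytes), the upload timestamp (4 bytes), the file size (8 bytes, of which the low 32 bits hold the size of an ordinary file; the high bytes carry flag/random bits), and the CRC32 (4 bytes). A decoding sketch, assuming the standard URL-safe base64 alphabet and ignoring the trailing digits of the name:

```python
import base64
import struct

def decode_file_id(remote_filename):
    """Decode the metadata embedded in a FastDFS remote file name,
    e.g. "M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md"."""
    name = remote_filename.rsplit("/", 1)[-1]
    code = name[:27]                            # the base64-encoded part
    raw = base64.urlsafe_b64decode(code + "=")  # pad 27 chars to 28
    ip = ".".join(str(b) for b in raw[:4])
    timestamp = struct.unpack(">I", raw[4:8])[0]
    # the low 32 bits of the 8-byte field hold the size of a normal file
    size = struct.unpack(">Q", raw[8:16])[0] & 0xFFFFFFFF
    crc32 = struct.unpack(">I", raw[16:20])[0]
    return ip, timestamp, size, crc32

ip, ts, size, crc = decode_file_id("M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md")
print(ip, ts, size, crc)
```

Decoding the file uploaded by fdfs_test above yields source ip 192.168.50.132, size 2448, and crc32 3850358360, matching the tool's own output.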

Configuration file reference

The sample configuration files of FastDFS are in the conf directory of the source package:

[root@localhost conf]# pwd
/home/download/fastdfs/conf
[root@localhost conf]# ls
anti-steal.jpg client.conf http.conf mime.types storage.conf storage_ids.conf tracker.conf

client.conf

Client configuration file (the Chinese glosses that merely restated the English comments have been dropped):

# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds
# default value is 30s
network_timeout = 60

# the base path to store log files
base_path = /home/logs/fastdfs

# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma
# (i.e. a tracker server on a dual-NIC machine),
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122

#tracker_server = 192.168.0.196:22122
tracker_server = 192.168.50.132:22122

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf


# HTTP settings (tracker server HTTP port)
http.tracker_server_port = 80

# use "#include" directive to include other HTTP settings
##include http.conf
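As the comments above note, the HOST part of a tracker_server value may be a single address or a comma-separated inner/outer pair. A small sketch (the helper name is my own) that splits such a value into its host list and port:

```python
def parse_tracker_server(value):
    """Split a tracker_server value such as
    "192.168.2.100,122.244.141.46:22122" into (hosts, port)."""
    host_part, port = value.rsplit(":", 1)
    hosts = [h.strip() for h in host_part.split(",")]
    return hosts, int(port)

print(parse_tracker_server("192.168.2.100,122.244.141.46:22122"))
print(parse_tracker_server("192.168.50.132:22122"))
```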

http.conf

HTTP configuration file:

# HTTP default content type
http.default_content_type = application/octet-stream

# MIME types mapping filename
# MIME types file format: MIME_type extensions
# such as: image/jpeg jpeg jpg jpe
# you can use apache's MIME file: mime.types
http.mime_types_filename = mime.types

# if use token to anti-steal (hotlink protection)
# default value is false (0)
http.anti_steal.check_token = false

# token TTL (time to live), seconds
# default value is 600
http.anti_steal.token_ttl = 900

# secret key to generate anti-steal token
# this parameter must be set when http.anti_steal.check_token set to true
# the length of the secret key should not exceed 128 bytes
http.anti_steal.secret_key = FastDFS1234567890

# return the content of this file when the token check fails
# default value is empty (no file specified)
http.anti_steal.token_check_fail = /home/yuqing/fastdfs/conf/anti-steal.jpg

# if support multi regions for HTTP Range
# default value is true
http.multi_range.enabed = true
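When http.anti_steal.check_token is enabled, download URLs must carry a token and ts parameter. The token is an MD5 over the file id (without the leading group name), the secret key, and the timestamp; the concatenation order below is my assumption from the official clients and should be verified against fdfs_http_gen_token / ProtoCommon.getToken for your version:

```python
import hashlib

def gen_token(file_id, secret_key, timestamp):
    """Anti-steal token sketch: md5(file_id + secret_key + timestamp).

    file_id is the remote file name WITHOUT the leading group name,
    e.g. "M00/00/00/wKgy...md". Ordering assumed, not verified here.
    """
    buf = (file_id + secret_key + str(timestamp)).encode("utf-8")
    return hashlib.md5(buf).hexdigest()

token = gen_token("M00/00/00/wKgyhF6eu0aAYkY8AAAJkOV_zlg8145.md",
                  "FastDFS1234567890", 1587460934)
# the download URL then looks like:
# http://host/group1/M00/.../file.md?token=<token>&ts=1587460934
print(token)
```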

tracker.conf

Tracker Server configuration file:

# is this config file disabled(是否禁用此配置文件)
# false for enabled
# true for disabled
disabled = false

# bind an address of this host(为此主机绑定一个IP)
# empty for bind all addresses of this host(为空则表示绑定此主机的所有IP)
bind_addr =

# the tracker server port(端口)
port = 22122

# connect timeout in seconds(连接超时)
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds for send and recv(网络超时)
# default value is 30
network_timeout = 60

# the base path to store data and log files(存放数据和日志)
base_path = /home/yuqing/fastdfs

# max concurrent connections this server support(服务支持的最大连接数)
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024

# accept thread count(接受的线程数)
# default value is 1 which is recommended
# since V4.07
accept_threads = 1

# work thread count(工作线程数)
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4

# the min network buff size(最小网络缓冲区)
# default value 8KB
min_buff_size = 8KB

# the max network buff size(最大网络缓冲区)
# default value 128KB
max_buff_size = 128KB

# the method for selecting group to upload files(选择组上传文件策略)
# 0: round robin 随机
# 1: specify group 指定
# 2: load balance, select the max free space group to upload file(负载均衡,选择空闲空间最大的组上传文件)
store_lookup = 2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group = group2

# which storage server to upload file(择 storager server 上传文件策略)
# 0: round robin (default) 随机
# 1: the first server order by ip address 根据IP地址排序
# 2: the first server order by priority (the minimal) 根据优先级排序
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server = 0

# which path (means disk or mount point) of the storage server to upload file(storage server 上传文件的路径,指的是硬盘 或 挂载节点)
# 0: round robin 随机
# 2: load balance, select the max free space path to upload file (负载均衡,选择空闲空间最大的)
store_path = 0

# which storage server to download file(storage server 下载策略)
# 0: round robin (default) 随机
# 1: the source storage server which the current file uploaded to(当前文件上传的存储服务)
download_server = 0

# reserved storage space for system or other applications.(为系统或其他应用程序保留的存储空间)。
# if the free(available) space of any stoarge server in
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as: reserved_storage_space = 10%
reserved_storage_space = 20%

#standard log level as syslog, case insensitive, value list: (标准日志级别,不区分大小写)
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

#unix group name to run this program, 运行此应用的用户组
#not set (empty) means run by the group of current user 为空表示为当前用户组
run_by_group=

#unix username to run this program, 运行此应用的用户
#not set (empty) means run by current user 为空表示为当前用户
run_by_user =

# allow_hosts can ocur more than once, host can be hostname or ip address,(允许多个 HOST,可以是主机名或IP地址,支持多种配置方式)
# "*" (only one asterisk) means match all ip addresses * (匹配所有 IP 地址)
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *

# sync log buff to disk every interval seconds (同步日志到磁盘间隔时间)
# default value is 10 seconds
sync_log_buff_interval = 1

# check storage server alive interval seconds (检测存储服务存活间隔时间)
check_active_interval = 120

# thread stack size, should >= 64KB 线程栈大小
# default value is 256KB
thread_stack_size = 256KB

# auto adjust when the ip address of the storage server changed (当 storage server 的 IP 地址改变时自动判断)
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds (存储同步文件最大延迟时间)
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file (存储同步文件的最大时间)
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files (是否使用 trunk 文件存放一些小文件)
# default value is false
# since V3.00
use_trunk_file = false

# the min slot size, should <= 4KB 最小的槽
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size 最大的槽
# store the upload file to trunk file when it's size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 1MB

# the alignment size to allocate the trunk space 分配中继空间的对齐大小
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely of disk
# fragmentation, but the more space is wasted.
# 对齐大小越大,磁盘碎片的可能性越小,但浪费的空间也更多。
trunk_alloc_alignment_size = 256

# if merge contiguous free spaces of trunk file 是否合并中继文件的连续可用空间
# default value is false
# since V6.05
trunk_free_space_merge = true

# if delete / reclaim the unused trunk files 删除/回收未使用的中继文件
# default value is false
# since V6.05
delete_unused_trunk_files = false

# the trunk file size, should >= 4MB (trunk file 大小)
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely 是否提前创建 trunk file
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file 创建 trunk file 文件时间
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second (创建 trunk file 时间间隔, 86400 即隔一天)
# default value is 38400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file(创建 trunk file 阀值)
# when the free trunk file size less than the threshold,(当空闲的 trunk file 大小小于阀值,则创建 trunk file)
# will create he trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces(如果加载 trunk 空闲空间而检测到已被占用,则忽略)
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog(是否忽略 storage_trunk.dat, 从 trunk 二进制日志重新加载)
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file(压缩 trunk binlog 文件的最小间隔)
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 86400

# the interval for compressing the trunk binlog file(压缩 trunk binlog 文件的间隔时间)
# unit: second, 0 means never compress
# recommand to set this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400

# compress the trunk binlog time base, time format: Hour:Minute(压缩 trunk binlog 文件的时间点)
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00

# max backups for the trunk binlog file(trunk binlog 文件最大备份数)
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7

# if use storage server ID instead of IP address(是否使用 storage server id 代替 IP 地址)
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false

# (指定 sttorage id 文件, 可以是相对路径 或 绝对路径)
# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf

# (存储服务ID文件中的 ID 类型)
# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = id

# (存储从文件是否使用符号链接)
# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# (是否每天旋转日志)
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# (旋转日期时间点)
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00

# (是否使用 gzip 压给它旧的错误日志)
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false

# (压缩几天前的错误日志)
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7

# (日志文件超过此大小时旋转错误日志)
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# (日志保留天数)
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# (是否使用连接池)
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true

# (连接空闲时间,超过则被关闭)
# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# (tracker server 端口)
# HTTP port on this tracker server
http.server_port = 8080

# (检查 storage HTTP server 存活的间隔时间)
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval = 30

# (检查存储 HTTP Server 存活类型)
# check storage HTTP server alive type, values are:
# tcp : connect to the storge server with HTTP port only,
# do not request and get response(只连接 HTTP 端口)
# http: storage check alive url must return http status 200(返回 200)
# default value is tcp
http.check_alive_type = tcp

# check storage HTTP server alive uri/url(检查 HTTP 服务存活的 URI/URL)
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri = /status.html
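When `http.check_alive_type = tcp`, the tracker only verifies that the storage server's HTTP port accepts a TCP connection, without sending a request. A minimal Java sketch of such a probe (the class and method names here are illustrative, not FastDFS code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpAliveCheck {

    // Returns true when a TCP connection to host:port succeeds within timeoutMs,
    // which is all a "tcp" type alive check needs to observe.
    public static boolean isAlive(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Probe a locally opened port to demonstrate the check.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(isAlive("127.0.0.1", server.getLocalPort(), 1000)); // true
        }
    }
}
```

The `http` type instead issues a request to `http.check_alive_uri` and expects HTTP status 200.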

storage.conf

Storage Server (storage node) configuration:

# is this config file disabled
# false for enabled
# true for disabled
# (是否禁用此配置文件)
disabled = false

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
# (此存储服务的组名,注释或移除此项,则跟踪服务的配置文件中 use_storage_id 项必须设置为 true,
# 且 storage_ids.conf 必须配置正确)
group_name = group1

# bind an address of this host
# empty for bind all addresses of this host
# (为此主机绑定一个地址,空表示绑定此主机的所有地址)
bind_addr =

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
# (当连接到其它服务时,是否为此主机绑定地址,此存储服务做为一个客户端)
client_bind = true

# the storage server port
# (存储服务端口)
port = 23000

# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
# (连接超时时间, 默认 30 秒, 内网 2 秒足够)
connect_timeout = 5

# network timeout in seconds for send and recv
# default value is 30
# (网络超时)
network_timeout = 60

# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
# (发送心跳到跟踪服务的间隔时间)
heart_beat_interval = 30

# disk usage report interval in seconds
# the storage server send disk usage report to tracker server periodically
# default value is 300
# (发送硬盘使用的报告间隔时间)
stat_report_interval = 60

# the base path to store data and log files
# NOTE: the binlog files maybe are large, make sure
# the base path has enough disk space,
# eg. the disk free space should > 50GB
# (存放数据和日志文档)
base_path = /home/yuqing/fastdfs

# max concurrent connections the server supported,
# you should set this parameter larger, eg. 10240
# default value is 256
# (服务支持最大并发连接数)
max_connections = 1024

# the buff size to recv / send data from/to network
# this parameter must more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
# (接收和发送数据的缓冲区大小)
buff_size = 256KB

# accept thread count
# default value is 1 which is recommended
# since V4.07
# (接受的线程数)
accept_threads = 1

# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
# (工作线程数)
work_threads = 4

# if disk read / write separated
## false for mixed read and write
## true for separated read and write
# default value is true
# since V2.00
# (是否开启硬盘读写分离)
disk_rw_separated = true

# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
# (每个存储路径的硬盘读线程数)
disk_reader_threads = 1

# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
# (每个存储路径的硬盘写线程数)
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
# (如果没有要同步的元素,会在X毫秒后再次尝试读取binlog)
sync_wait_msec = 50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
# (在同步一个文件后,调用 usleep 休眠毫秒)
sync_interval = 0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# (同步开始时间)
sync_start_time = 00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# (同步结束时间)
sync_end_time = 23:59

# write to the mark file after sync N files
# default value is 500
# (同步 N 个文件后就写入标记文件)
write_mark_file_freq = 500

# disk recovery thread count
# default value is 1
# since V6.04
# (硬盘恢复线程数)
disk_recovery_threads = 3

# store path (disk or mount point) count, default value is 1
# (存储路径(硬盘或挂载点)的个数)
store_path_count = 1

# store_path#, based on 0, to configure the store paths to store files
# if store_path0 not exists, it's value is base_path (NOT recommended)
# the paths must be exist.
#
# IMPORTANT NOTE:
# the store paths' order is very important, don't mess up!!!
# the base_path should be independent (different) of the store paths
# (存储路径, 从 0 开始, 如果 store_path0 不存在, 则使用 base_path (不推荐) )
store_path0 = /home/yuqing/fastdfs
#store_path1 = /home/yuqing/fastdfs2

# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
# (在 store_path 中创建的最大子目录个数)
subdir_count_per_path = 256

# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma,
# the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
# (tracker server 服务地址, 可以配置多个)
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
# (日志级别)
log_level = info

#unix group name to run this program,
#not set (empty) means run by the group of current user
# (运行此应用的 unix 组名, 空则表示为当前用户所在的组)
run_by_group =

#unix username to run this program,
#not set (empty) means run by current user
# (运行此应用的用户名, 空则表示为当前用户)
run_by_user =

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
# (允许的主机, 可以是IP或主机名)
allow_hosts = *

# the mode of the files distributed to the data path
# 0: round robin(default) 轮询(默认)
# 1: random, distributed by hash code 随机, 按文件名 hash code 分发
# (文件分布到数据路径的模式)
file_distribute_path_mode = 0

# valid when file_distribute_path_mode is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
# (当 file_distribute_path_mode 设置为 0 时有效, 当写入文件数达到此数量时, 则轮转使用下一个路径)
file_distribute_rotate_count = 100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
# (当写大文件时, 调用 fsync 刷入到硬盘)
fsync_after_written_bytes = 0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
# (同步日志缓冲区到硬盘的间隔时间)
sync_log_buff_interval = 1

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
# (同步二进制日志缓冲区/缓存 到硬盘的间隔时间)
sync_binlog_buff_interval = 1

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
# (同步存储的统计信息到硬盘的间隔时间)
sync_stat_file_interval = 300

# thread stack size, should >= 512KB
# default value is 512KB
# (线程栈大小)
thread_stack_size = 512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
# (上传优先级,值越小,优先级越高)
upload_priority = 10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
# (NIC别名前缀,即网卡别名,多个使用逗号分隔)
if_alias_prefix =

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
# (是否检查重复文件, true, 使用 FastDHT 存储文件索引)
check_file_duplicate = 0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
# (用于检测是否重复文件的签名方式)
file_signature_method = hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
# (存放文件索引的名称空间, 当 check_file_duplicate = true / on 时必须设置)
key_namespace = FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection) (默认 0, 短连接)
# (设置是否启用与 FastDHT 服务的持久化连接)
keep_alive = 0

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
# (使用 include filename 命令指定 FastDHT 服务配置)
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
# (是否记录访问日志)
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
# (是否每天轮转访问日志)
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
# (访问日志轮转时间)
access_log_rotate_time = 00:00

# if compress the old access log by gzip
# default value is false
# since V6.04
# (是否压缩旧的访问日志)
compress_old_access_log = false

# compress the access log days before
# default value is 1
# since V6.04
# (压缩几天前的访问日志)
compress_access_log_days_before = 7

# if rotate the error log every day
# default value is false
# since V4.02
# (是否每天轮转错误日志)
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
# (错误日志轮转时间)
error_log_rotate_time = 00:00

# if compress the old error log by gzip
# default value is false
# since V6.04
# (压缩旧的错误日志)
compress_old_error_log = false

# compress the error log days before
# default value is 1
# since V6.04
# (压缩几天前的错误日志)
compress_error_log_days_before = 7

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
# (根据日志文件大小轮转日志)
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
# (根据文件大小轮转错误日志)
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
# (日志文件保留几天)
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
# (是否跳过无效的记录,当同步文件时)
file_sync_skip_invalid_record = false

# if use connection pool
# default value is false
# since V4.05
# (是否使用连接池)
use_connection_pool = true

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
# (连接空闲时长,超过则连接被关闭)
connection_pool_max_idle_time = 3600

# if compress the binlog files by gzip
# default value is false
# since V6.01
# (是否使用 gzip 压缩二进制日志文件)
compress_binlog = true

# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
# (压缩二进制日志的时间点)
compress_binlog_time = 01:30

# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use a same store path for
# some specific purposes, you should set this parameter to false
# default value is true
# since V6.03
# (是否检查存储路径的标记以防止混淆,建议开启,如果两个服务使用一个相同的存储路径,此参数要设置为 false)
check_store_path_mark = true

# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
# (服务域名, 如果为空则表示使用 IP 地址)
http.domain_name =

# the port of the web server on this storage server
# (http 端口)
http.server_port = 8888
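Several of the values above, such as `sync_start_time` and `sync_end_time`, use an `Hour:Minute` format. As a small illustration (the class and method names are assumptions, not FastDFS code), checking whether a moment falls inside the daily sync window could look like:

```java
import java.time.LocalTime;

public class SyncWindow {

    // Parses "Hour:Minute" values like sync_start_time / sync_end_time and
    // reports whether the given time falls inside the daily sync window.
    public static boolean inWindow(String start, String end, LocalTime now) {
        LocalTime s = LocalTime.parse(pad(start));
        LocalTime e = LocalTime.parse(pad(end));
        return !now.isBefore(s) && !now.isAfter(e);
    }

    // LocalTime.parse requires two-digit hours, while the config allows "0:00".
    private static String pad(String hourMinute) {
        String[] parts = hourMinute.split(":");
        return String.format("%02d:%02d",
                Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
    }

    public static void main(String[] args) {
        System.out.println(inWindow("00:00", "23:59", LocalTime.of(12, 30))); // true
        System.out.println(inWindow("1:00", "2:00", LocalTime.of(3, 0)));     // false
    }
}
```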

storage_ids.conf

Storage Server ID and group name configuration:

# <id>  <group_name>  <ip_or_hostname[:port]>
#
# id is a natural number (1, 2, 3 etc.),(id 是个自然数,最大长度 6 位)
# 6 bits of the id length is enough, such as 100001
#
# (storage ip 或 域名可以是逗号分隔的双IP,一个内网一个外网,或两个内网)
# storage ip or hostname can be dual IPs separated by comma,
# one is an inner (intranet) IP and another is an outer (extranet) IP,
# or two different types of inner (intranet) IPs
# for example: 192.168.2.100,122.244.141.46
# another eg.: 192.168.1.10,172.17.4.21
#
# (端口是可选项, 如果在一个服务上运行多个存储服务实例, 则必须指定端口以区分不同的实例)
# the port is optional. if you run more than one storaged instances
# in a server, you must specify the port to distinguish different instances.

100001 group1 192.168.0.196
100002 group1 192.168.0.197
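A tiny sketch of parsing this `<id> <group_name> <host>` format (the parser is illustrative, not part of FastDFS):

```java
import java.util.ArrayList;
import java.util.List;

public class StorageIdsParser {

    // One "<id> <group_name> <ip_or_hostname[:port]>" record from storage_ids.conf.
    public record StorageId(String id, String group, String host) {}

    // Comment lines (#) and blank lines are skipped; each remaining line holds
    // the id, group name and address separated by whitespace.
    public static List<StorageId> parse(String content) {
        List<StorageId> result = new ArrayList<>();
        for (String line : content.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue;
            }
            String[] fields = line.split("\\s+");
            result.add(new StorageId(fields[0], fields[1], fields[2]));
        }
        return result;
    }

    public static void main(String[] args) {
        String conf = "# storage ids\n100001 group1 192.168.0.196\n100002 group1 192.168.0.197\n";
        System.out.println(parse(conf));
    }
}
```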

Spring Boot Integration

Integrate the FastDFS client with Spring Boot to implement upload and download.

Add Dependencies

The latest fastdfs-client-java jar cannot be downloaded from the public Maven repositories; download the source, package it, and install it into your local Maven repository.
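One way to build and install the client locally (assuming git and Maven are on PATH; the snapshot version produced by the build must match the version declared in your pom):

```shell
# Build fastdfs-client-java from source and install it into the local Maven repository
git clone https://github.com/happyfish100/fastdfs-client-java.git
cd fastdfs-client-java
mvn clean install -DskipTests
```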

<!--FastDFS-->
<dependency>
<groupId>org.csource</groupId>
<artifactId>fastdfs-client-java</artifactId>
<version>1.29-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>

Configuration File

See the official description for details: https://gitee.com/fastdfs100/fastdfs-client-java. The example below uses a single Properties configuration file.

Create a fastdfs-client.properties file (any file name works) on the Spring Boot classpath (resources), with the following properties:

fastdfs.connect_timeout_in_seconds = 5
fastdfs.network_timeout_in_seconds = 30
fastdfs.charset = UTF-8
fastdfs.http_anti_steal_token = false
fastdfs.http_secret_key = FastDFS1234567890
fastdfs.http_tracker_http_port = 80

fastdfs.tracker_servers = 192.168.50.129:22122,192.168.50.132:22122

fastdfs.connection_pool.enabled = true
fastdfs.connection_pool.max_count_per_entry = 500
fastdfs.connection_pool.max_idle_time = 3600
fastdfs.connection_pool.max_wait_time_in_ms = 1000
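fastdfs-client-java reads this file through `ClientGlobal.initByProperties`. As a dependency-free illustration of the key format (the class and method names below are assumptions, not library API), the same keys can be read with `java.util.Properties`:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class FastdfsProps {

    // fastdfs.tracker_servers may hold several "host:port" entries
    // separated by commas; split them into individual addresses.
    public static String[] trackerServers(Properties props) {
        String value = props.getProperty("fastdfs.tracker_servers", "");
        return value.split("\\s*,\\s*");
    }

    public static void main(String[] args) throws IOException {
        String content = "fastdfs.connect_timeout_in_seconds = 5\n"
                + "fastdfs.tracker_servers = 192.168.50.129:22122,192.168.50.132:22122\n";
        Properties props = new Properties();
        props.load(new StringReader(content));
        System.out.println(props.getProperty("fastdfs.connect_timeout_in_seconds")); // 5
        for (String server : trackerServers(props)) {
            System.out.println(server);
        }
    }
}
```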

Load Configuration

The configuration can be loaded in a static initializer block, or before the bean is initialized. The example below puts configuration initialization, upload and download into a single client bean.

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.commons.io.IOUtils;
import org.csource.common.MyException;
import org.csource.common.NameValuePair;
import org.csource.fastdfs.ClientGlobal;
import org.csource.fastdfs.StorageClient;
import org.csource.fastdfs.StorageClient1;
import org.csource.fastdfs.StorageServer;
import org.csource.fastdfs.TrackerClient;
import org.csource.fastdfs.TrackerServer;
import org.springframework.context.annotation.Configuration;
import org.springframework.util.StringUtils;

/**
 * FastDFS client.
 */
@Configuration
public class FastDFSClient {

public static final String CONFIG_PROPERTIES = "fastdfs-client.properties";
public static final String GROUP_NAME = "group1";
private static TrackerClient trackerClient = null;
private static TrackerServer trackerServer = null;
private static StorageServer storageServer = null;
private static StorageClient1 storageClient = null;

static {
try {
ClientGlobal.initByProperties(CONFIG_PROPERTIES);
System.out.println(ClientGlobal.configInfo());
trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
trackerServer = trackerClient.getTrackerServer();
storageServer = trackerClient.getStoreStorage(trackerServer);
storageClient = new StorageClient1(trackerServer, storageServer);
} catch (IOException e) {
e.printStackTrace();
} catch (MyException e) {
e.printStackTrace();
}
}


/**
* 初始化配置
*
* @throws IOException
* @throws MyException
*/
/*@PostConstruct
public void clientGlobal() throws IOException, MyException {
ClientGlobal.initByProperties(CONFIG_PROPERTIES);
//System.out.println(ClientGlobal.configInfo());
this.trackerClient = new TrackerClient(ClientGlobal.g_tracker_group);
this.trackerServer = this.trackerClient.getTrackerServer();
this.storageServer = this.trackerClient.getStoreStorage(this.trackerServer);
this.storageClient = new StorageClient1(trackerServer, storageServer);
}*/

/**
 * Upload a file.
 *
 * @param file the file to upload
 * @param fileExtName file extension name (without the dot)
 * @param metaList file metadata (name-value pairs)
 * @return the file id, e.g. group1/M00/00/01/xxx.png
 * @throws IOException
 * @throws MyException
 */
public String uploadFile(File file, String fileExtName, Map<String, String> metaList) throws IOException, MyException {
byte[] buff = IOUtils.toByteArray(new FileInputStream(file));
return uploadFile(buff, fileExtName, metaList);
}

public String uploadFile(byte[] buff, String fileExtName, Map<String, String> metaList) throws IOException, MyException {
NameValuePair[] nameValuePairs = null;
if (metaList != null) {
nameValuePairs = new NameValuePair[metaList.size()];
int index = 0;
for (Iterator<Map.Entry<String, String>> iterator = metaList.entrySet().iterator(); iterator.hasNext(); ) {
Map.Entry<String, String> entry = iterator.next();
String name = entry.getKey();
String value = entry.getValue();
nameValuePairs[index++] = new NameValuePair(name, value);
}
}
return storageClient.upload_file1(GROUP_NAME, buff, fileExtName, nameValuePairs);
}

/**
 * Get file metadata.
 *
 * @param groupName group name, defaults to GROUP_NAME when empty
 * @param fileId remote file id
 * @return metadata map, or null on failure
 */
public Map<String, String> getFileMetadata(String groupName, String fileId) {
try {
if (StringUtils.isEmpty(groupName)) {
groupName = GROUP_NAME;
}
NameValuePair[] metaList = storageClient.get_metadata(groupName, fileId);
if (metaList != null) {
HashMap<String, String> map = new HashMap<String, String>();
for (NameValuePair metaItem : metaList) {
map.put(metaItem.getName(), metaItem.getValue());
}
return map;
}
} catch (Exception e) {
e.printStackTrace();
}
return null;
}

/**
 * Download a file to a local file.
 *
 * @param groupName group name, defaults to GROUP_NAME when empty
 * @param remoteFilename remote file name (the part of the file id after the group)
 * @param outFile local target file
 * @return 0 on success, -1 on failure
 */
public int downloadFile(String groupName, String remoteFilename, File outFile) {
FileOutputStream fos = null;
try {
if (StringUtils.isEmpty(groupName)) {
groupName = GROUP_NAME;
}
byte[] content = storageClient.download_file(groupName, remoteFilename);
fos = new FileOutputStream(outFile);
InputStream ips = new ByteArrayInputStream(content);
IOUtils.copy(ips, fos);
return 0;
} catch (Exception e) {
e.printStackTrace();
} finally {
if (fos != null) {
try {
fos.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return -1;
}

/**
 * Delete a file.
 *
 * @param groupName group name
 * @param remoteFilename remote file name
 * @return 0 on success, otherwise an error code
 */
public int deleteFile(String groupName, String remoteFilename) {
try {
return storageClient.delete_file(groupName, remoteFilename);
} catch (Exception e) {
e.printStackTrace();
}
return -1;
}

public TrackerClient getTrackerClient() {
return trackerClient;
}

public TrackerServer getTrackerServer() {
return trackerServer;
}

public StorageServer getStorageServer() {
return storageServer;
}

public StorageClient getStorageClient() {
return storageClient;
}
}

Upload and Download

Simulate front-end file upload and download: create a Controller to receive the upload and download requests.

@RestController
@RequestMapping("/file")
public class FileController {

@Autowired
private FastDFSClient fastDFSClient;

@PostMapping("/upload")
public String fileUpload(HttpServletRequest request, MultipartFile file) throws IOException, MyException {

byte[] fileBytes = file.getBytes();
String fileName = file.getOriginalFilename();
int index = fileName.lastIndexOf(".");
String suffix = fileName.substring(index + 1);
// note: the second argument is the extension name, not the file name
String remoteUrl = fastDFSClient.uploadFile(fileBytes, suffix, null);

StringBuilder fileUrl = new StringBuilder("http://");
String hostString = fastDFSClient.getTrackerServer().getInetSocketAddress().getHostString();
fileUrl.append(hostString).append("/").append(remoteUrl);
return fileUrl.toString();
}

@PostMapping("/download")
public void fileDownload(String groupName, String remoteFilename) throws Exception {
int index = remoteFilename.lastIndexOf("/");
String fileName = remoteFilename.substring(index + 1);
File file = new File("D:\\" + fileName);
int result = fastDFSClient.downloadFile(groupName, remoteFilename, file);
System.out.println(result);
}

}

The upload returns a result like:

http://192.168.50.132/group1/M00/00/01/wKgyhF6no22AdiH_AABHkyXi0i0719.png
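The returned file ID joins the group name and the remote filename with the first `/`, so an ID such as `group1/M00/...` can be split back into the two arguments that the download and delete methods expect. A sketch of that split (the helper below is illustrative, not part of fastdfs-client-java):

```java
public class FileIdUtil {

    // A FastDFS file id such as group1/M00/00/01/wKgy...png is the group name
    // and the remote filename joined by the first '/'; split it back apart.
    public static String[] split(String fileId) {
        int pos = fileId.indexOf('/');
        if (pos < 0) {
            throw new IllegalArgumentException("invalid file id: " + fileId);
        }
        return new String[] { fileId.substring(0, pos), fileId.substring(pos + 1) };
    }

    public static void main(String[] args) {
        String[] parts = split("group1/M00/00/01/wKgyhF6no22AdiH_AABHkyXi0i0719.png");
        System.out.println(parts[0]); // group1
        System.out.println(parts[1]); // M00/00/01/wKgyhF6no22AdiH_AABHkyXi0i0719.png
    }
}
```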

Nginx Configuration

Install Nginx

For a multi-server cluster deployment, load balancing is achieved through Nginx reverse proxying. Recent FastDFS versions no longer serve the URL returned by an upload for direct browser preview; to enable that, configure Nginx to map the request path to the storage path.

See Nginx series (1): Installing Nginx on Linux.

fastdfs-nginx-module

Install the Nginx extension module: fastdfs-nginx-module.

File synchronization between FastDFS storage nodes may lag (network latency, write delays). When Nginx reverse-proxies a request to a node that has not yet finished syncing, the file will not be found. fastdfs-nginx-module solves exactly this: when it happens, the request is redirected to the source node (the node the file was originally uploaded to).

Download

Download the fastdfs-nginx-module source:

github : https://github.com/happyfish100/fastdfs-nginx-module
gitee : https://gitee.com/fastdfs100/fastdfs-nginx-module.git
command lines as (YOUR_PATH is your base path eg. /home/yuqing ):

cd $YOUR_PATH
git clone https://github.com/happyfish100/fastdfs-nginx-module
cd fastdfs-nginx-module; git checkout V1.22

Compile and Install

Compile Nginx with the fastdfs-nginx-module module:

cd nginx-1.16.1
./configure --add-module=$YOUR_PATH/fastdfs-nginx-module/src
make; make install

Modify Configuration

Modify the Nginx configuration file nginx.conf and add a path mapping to support browser preview.

Notice:
* replace $YOUR_PATH with your fastdfs-nginx-module base path, such as /home/yuqing
* before compile, you can change FDFS_OUTPUT_CHUNK_SIZE and
FDFS_MOD_CONF_FILENAME macro in the config file as:
CFLAGS="$CFLAGS -D_FILE_OFFSET_BITS=64 -DFDFS_OUTPUT_CHUNK_SIZE='256*1024' -DFDFS_MOD_CONF_FILENAME='\"/etc/fdfs/mod_fastdfs.conf\"'"

#step 5. config the nginx config file such as nginx.conf, add the following lines:
# 修改 nginx 配置文件 nginx.conf
location /M00 {
root /home/yuqing/fastdfs/data;
ngx_fastdfs_module;
}
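A fuller nginx.conf sketch, under the assumption that the storage node serves HTTP on port 8888 (`http.server_port` in storage.conf) and stores data under /home/yuqing/fastdfs/data; the group prefix pattern is an example and must match your group names:

```nginx
# Sketch: serve group files from this storage node via fastdfs-nginx-module
server {
    listen 8888;
    server_name localhost;

    # Match URLs like /group1/M00/00/01/xxx.png
    location ~ /group[0-9]+/M00 {
        root /home/yuqing/fastdfs/data;
        ngx_fastdfs_module;
    }
}
```

The module itself reads its settings from /etc/fdfs/mod_fastdfs.conf (step 8 below).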

Create a Symbolic Link

#step 6. make a symbol link ${fastdfs_base_path}/data/M00 to ${fastdfs_base_path}/data,
# 创建软链接, command line such as:
ln -s /home/yuqing/fastdfs/data /home/yuqing/fastdfs/data/M00

Copy Configuration Files

The http.conf and mime.types files were already installed along with FastDFS, so this step can be skipped.

#step 7. copy conf/http.conf and conf/mime.types in FastDFS source path to /etc/fdfs/ and modify http.conf, such as:
cd /home/yuqing/fastdfs
cp conf/http.conf conf/mime.types /etc/fdfs/

#step 8. copy mod_fastdfs.conf to /etc/fdfs/ and modify it

Start the Nginx Service

#step 9. restart the nginx server, such as:
/usr/local/nginx/sbin/nginx -s stop; /usr/local/nginx/sbin/nginx

Monitor the Error Log

#step 10. view nginx log file, such as:
tail -n 100 /usr/local/nginx/logs/error.log

Other References

  1. 分布式文件系统,FastDFS集群搭建与实战
  2. 架构之路搭建FastDFS分布式文件系统

http://blog.gxitsky.com/2020/04/20/SpringBoot-54-FastDFS/

Author: 光星
Published: 2020-04-20
Updated: 2022-06-17
