Creating a Swoole Project with Composer and Docker

I recently developed a small feature with the Swoole extension, without using a framework, starting from scratch. The project uses a directory layout similar to Symfony's:

.
├── bin
├── config
├── src
├── tests
├── vendor
├── composer.json
├── .gitignore
├── Dockerfile
├── LICENSE
├── phpunit.xml
└── README.md

Application code lives under src and test code under tests; config holds configuration files, bin contains command-line tools, and vendor is where Composer installs third-party dependencies. The composer.json is modeled on k911/swoole-bundle and looks like this:

{
    "name": "llitllie/swoole-sample",
    "type": "library",
    "description": "Swoole Sample",
    "keywords": [
        "Swoole"
    ],
    "license": "MIT",
    "homepage": "https://github.com/llitllie/swoole-sample.git",
    "authors": [{
        "name": "llitllie",
        "email": "xxx@yyy.zzz",
        "homepage": "https://github.com/llitllie/swoole-sample.git"
    }],
    "require": {
        "php": "^7.2",
        "ext-swoole": "^4.3.4"
    },
    "require-dev": {
        "phpunit/phpunit": "^8",
        "phpstan/phpstan": "^0.11.8",
        "friendsofphp/php-cs-fixer": "^2.15",
        "swoole/ide-helper": "@dev"
    },
    "scripts": {
        "static-analyse-src": [
            "phpstan analyze src -l 7 --ansi"
        ],
        "cs-analyse": [
            "php-cs-fixer fix -v --dry-run --diff --stop-on-violation --ansi"
        ],
        "analyse": [
            "@static-analyse-src",
            "@cs-analyse"
        ],
        "test": [
            "@analyse",
            "@unit-tests"
        ],
        "unit-tests": [
            "phpunit tests --testdox --colors=always"
        ],
        "fix": "php-cs-fixer fix -v --ansi"
    },
    "suggest": {
        "ext-uv": "^0.2.4",
        "ext-ev": "^1.0.6"
    }
}

The require section pins PHP >= 7.2 plus the Swoole extension, and Composer checks both automatically during installation. require-dev pulls in the unit-testing tool phpunit, the static-analysis tool phpstan, and the coding-style checker php-cs-fixer. phpstan analyzes the code and catches problems before it ever runs; php-cs-fixer reformats the code to keep the style consistent. The scripts section defines commands that invoke these tools, so they can be run through Composer:

$ ls -la vendor/bin/
total 4
drwxrwxr-x.  2 vagrant vagrant   69 Jul  1 07:24 .
drwxrwxr-x. 24 vagrant vagrant 4096 Jul  1 07:25 ..
lrwxrwxrwx.  1 vagrant vagrant   41 Jul  1 07:24 php-cs-fixer -> ../friendsofphp/php-cs-fixer/php-cs-fixer
lrwxrwxrwx.  1 vagrant vagrant   33 Jul  1 07:24 php-parse -> ../nikic/php-parser/bin/php-parse
lrwxrwxrwx.  1 vagrant vagrant   30 Jul  1 07:24 phpstan -> ../phpstan/phpstan/bin/phpstan
lrwxrwxrwx.  1 vagrant vagrant   26 Jul  1 07:24 phpunit -> ../phpunit/phpunit/phpunit
$ ./vendor/bin/phpunit --testdox tests

$ composer test
> phpstan analyze src -l 7 --ansi
 7/8 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░]  87%

 8/8 [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓] 100%                                                                                
 [OK] No errors                                                                 
                                                                                

> php-cs-fixer fix -v --dry-run --diff --stop-on-violation --ansi
Loaded config default from "/home/ticket/.php_cs.dist".
Using cache file ".php_cs.cache".
SSSSSSSSSSS
Legend: ?-unknown, I-invalid file syntax, file ignored, S-Skipped, .-no changes, F-fixed, E-error

Checked all files in 0.070 seconds, 6.000 MB memory used
> phpunit tests --testdox --colors=always
PHPUnit 8.2.4 by Sebastian Bergmann and contributors.

Sim\Ticket\Node\Zookeeper
 ✓ Get id

Sim\Ticket\Number
 ✓ Load
 ✓ Get timestamp
 ✓ Get node id
 ✓ Generate
 ✓ Generate with zookeeper node

Sim\Zookeeper\Client
 ✓ Zookeeper
 ✓ Zookeeper extension

Time: 841 ms, Memory: 4.00 MB

OK (8 tests, 23 assertions)

phpunit.xml holds the test configuration; autoloading (the bootstrap) and variables can be defined there (a small test that reads those variables is sketched after the config):

<?xml version="1.0" encoding="UTF-8"?>

<!-- https://phpunit.readthedocs.io/en/7.3/configuration.html -->
<phpunit xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="https://schema.phpunit.de/7.3/phpunit.xsd"
         colors="true"
         bootstrap="vendor/autoload.php"
>
    <filter>
        <whitelist processUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">src/</directory>
        </whitelist>
    </filter>
    <testsuites>
        <testsuite name="Unit tests suite">
            <directory>tests</directory>
        </testsuite>
    </testsuites>
    <php>
        <includePath>.</includePath>
        <const name="SERVICE" value="192.168.33.1"/>
        <env name="SERVICE" value="192.168.33.1"/>
    </php>
</phpunit>
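
The <php> section above exposes SERVICE to tests both as a constant and as an environment variable. A minimal sketch of a test that reads them (the test class and file name are illustrative, not part of the original project):

<?php
// tests/ConfigTest.php — illustrative; shows how values from phpunit.xml's <php>
// section become visible inside a test case.
use PHPUnit\Framework\TestCase;

final class ConfigTest extends TestCase
{
    public function testServiceAddressIsConfigured(): void
    {
        $this->assertSame('192.168.33.1', SERVICE);           // from <const name="SERVICE" .../>
        $this->assertSame('192.168.33.1', getenv('SERVICE'));  // from <env name="SERVICE" .../>
    }
}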

Composer can use an existing codebase as a template to quickly scaffold a new project, which makes it easy to reuse. Just push the code to GitHub and submit the package on packagist.org, then use composer create-project:

composer create-project llitllie/swoole-sample example dev-master

Docker is used to build and run the code; the Dockerfile looks like this:

ARG PHP_TAG="7.2-cli-alpine3.9"

FROM php:$PHP_TAG

ENV COMPOSER_ALLOW_SUPERUSER 1

RUN set -ex \
    && apk update \
    && apk add --no-cache --virtual .build-deps curl gcc g++ make build-base autoconf \
    && apk add libstdc++ openssl-dev libffi-dev \
    && docker-php-ext-install sockets \
    && docker-php-source extract \
    && printf "yes\nyes\nno\nyes\nno\n" | pecl install swoole \
    && docker-php-ext-enable swoole \
    && docker-php-source delete \
    && curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
    && apk del .build-deps \
    && rm -rf /tmp/* 

WORKDIR /usr/src/app
COPY . ./
ARG COMPOSER_ARGS="install"
RUN composer ${COMPOSER_ARGS} --prefer-dist --ignore-platform-reqs --no-progress --no-suggest --no-scripts --ansi

EXPOSE 9501
CMD ["php", "bin/server.php"]

This declares that Swoole will listen on port 9501 and that the container runs bin/server.php by default (a minimal sketch of that file follows the run commands below). Installing swoole through pecl asks a few interactive questions, e.g. whether to enable sockets/http2/mysqlnd support, so printf pipes in the answers. Build the image, then run it:

docker build  -t llitllie/swoole-project .
docker run --name swoole -d -p 9501:9501 llitllie/swoole-project
#docker run --name web -dit -p 9501:9501 --mount type=bind,source=/Users/vagrant/example,target=/opt/app llitllie/swoole-project /bin/sh
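
The Dockerfile's CMD expects a bin/server.php entry point, which the post does not show. A minimal sketch of what it might contain (the handler below is purely illustrative):

<?php
// bin/server.php — minimal Swoole HTTP server matching EXPOSE 9501 / CMD above.
declare(strict_types=1);

require dirname(__DIR__) . '/vendor/autoload.php';

use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\Http\Server;

$server = new Server('0.0.0.0', 9501);

$server->on('request', function (Request $request, Response $response): void {
    $response->header('Content-Type', 'text/plain');
    $response->end("Hello from Swoole\n");
});

$server->start();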

With that, the Swoole project template is ready. You can keep adding the libraries or scripts you need to composer.json, or build on top of the finished Docker image to add more software, change the listening port, and so on.
You can also hook up the Git repository on Docker Cloud and configure automated build rules, so images get built automatically.

Jupyter Notebook

For programming beginners, having an out-of-the-box environment, say a web page, where they can interact with code is extremely friendly. Sometimes we want to run scripts on a remote server and see the output, e.g. for scientific computing; sometimes we want to run commands on a server we cannot log into directly, and being able to work through a web UI, or use it as a jump host, is just as welcome. Jupyter Notebook is a web-based interactive environment built on IPython; it supports Python as well as other languages such as Julia and R. The notebook documents it creates automatically save the executed code and its results, so sessions can be replayed.
Installing Jupyter Notebook is easy, either through Anaconda or manually. Manual installation under Python 3:

pip3 install jupyter
export PATH=$PATH:/usr/local/python3/bin

Check the installation:

[root@localhost local]# pip3 show jupyter
Name: jupyter
Version: 1.0.0
Summary: Jupyter metapackage. Install all the Jupyter components in one go.
Home-page: http://jupyter.org
Author: Jupyter Development Team
Author-email: jupyter@googlegroups.org
License: BSD
Location: /usr/local/python3/lib/python3.7/site-packages
Requires: jupyter-console, notebook, ipywidgets, nbconvert, qtconsole, ipykernel
Required-by: 

Running jupyter notebook directly generates a locally accessible URL containing a token that changes every time, which is inconvenient. Set a password instead so you can log in:

[root@localhost opt]# jupyter notebook password
Enter password: 
Verify password: 
[NotebookPasswordApp] Wrote hashed password to /root/.jupyter/jupyter_notebook_config.json
[root@localhost bin]# cat /root/.jupyter/jupyter_notebook_config.json 
{
  "NotebookApp": {
    "password": "sha1:e04153005102:961b12eef91987a06b497f915fc3f18c62d8f714"
  }
}

Since this runs inside a virtual machine, Jupyter does not need to open a browser automatically, but it does need to listen for requests from any IP, on port 9030. Jupyter is run as root here, which is disallowed by default, hence --allow-root:

[root@localhost opt]# jupyter notebook --no-browser --allow-root --ip 0.0.0.0 --port 9030
[I 02:13:44.320 NotebookApp] Serving notebooks from local directory: /opt
[I 02:13:44.320 NotebookApp] The Jupyter Notebook is running at:
[I 02:13:44.320 NotebookApp] http://(localhost.localdomain or 127.0.0.1):9030/
[I 02:13:44.320 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 02:13:59.664 NotebookApp] 302 GET / (192.168.33.1) 1.22ms
[I 02:14:23.597 NotebookApp] Kernel started: 7ad63717-7a65-4dec-9d5a-9af654c28f75
[I 02:14:25.204 NotebookApp] Adapting to protocol v5.1 for kernel 7ad63717-7a65-4dec-9d5a-9af654c28f75
[I 02:14:37.350 NotebookApp] Starting buffering for 7ad63717-7a65-4dec-9d5a-9af654c28f75:ea68853b742c40f8bcf8745529ea95de
[I 02:14:43.735 NotebookApp] Kernel started: 5b569c8d-6936-4bd2-9674-0317c46948f6
[I 02:14:44.124 NotebookApp] Adapting to protocol v5.0 for kernel 5b569c8d-6936-4bd2-9674-0317c46948f6
[2019-06-03 02:14:43] kernel.DEBUG: Connection settings {"processId":6751,"connSettings":{"shell_port":39990,"iopub_port":48184,"stdin_port":40113,"control_port":43426,"hb_port":49075,"ip":"127.0.0.1","key":"d5f89bba-890ecf15e6b20718411170ad","transport":"tcp","signature_scheme":"hmac-sha256","kernel_name":"jupyter-php"},"connUris":{"stdin":"tcp://127.0.0.1:40113","control":"tcp://127.0.0.1:43426","hb":"tcp://127.0.0.1:49075","shell":"tcp://127.0.0.1:39990","iopub":"tcp://127.0.0.1:48184"}} []
[2019-06-03 02:14:44] KernelCore.DEBUG: Initialized sockets {"processId":6751} []

Then open a browser at http://192.168.33.70:9030, enter the password, and you can run Python from the web UI.

Jupyter can also run SQL through the ipython-sql extension; you only need to install the matching database driver, PyMySQL in this case:

python3 -m pip install PyMySQL

Then you can execute SQL directly in the web UI.
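
For example, in a notebook cell (the connection string and query are illustrative):

%load_ext sql
%sql mysql+pymysql://user:password@127.0.0.1/test
%sql SELECT VERSION()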

Jupyter has plenty of other extensions as well; see the reference links below.
Besides Python and SQL, Jupyter Notebook can support other languages (a list is linked in the references). A kernel typically either shells out through Bash or communicates over ZeroMQ (see the reference implementation). A Jupyter kernel needs to listen on the following sockets (a minimal heartbeat sketch follows the list):

  • Shell: executes commands
  • IOPub: pushes execution results and output
  • Stdin: receives input
  • Control: receives control commands, e.g. shutdown and interrupt
  • Heartbeat: heartbeat checks

This idea can also be used for remote monitoring and interactive control of IoT devices.
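
The Heartbeat channel is the simplest of the five: the kernel just echoes back whatever the Notebook server sends, so the server knows it is alive. A minimal sketch with pyzmq (the port is illustrative; a real kernel reads it from the connection file it is started with):

    # heartbeat-only sketch — not a full kernel
    import zmq

    ctx = zmq.Context()
    hb = ctx.socket(zmq.REP)
    hb.bind("tcp://127.0.0.1:49075")  # hb_port from the connection file (illustrative)

    while True:
        hb.send(hb.recv())  # echo the ping back unchanged
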
Let's install the PHP 7 kernel; its author even provides an installer, but ZeroMQ has to be installed first so the kernel can talk to the Jupyter server:

    yum install php-pecl-zmq
    wget https://litipk.github.io/Jupyter-PHP-Installer/dist/jupyter-php-installer.phar
    ./jupyter-php-installer.phar install
    

Check the installed files:

    [root@localhost opt]# ls -la /usr/local/share/jupyter/kernels/
    total 0
    drwxr-xr-x. 4 root root 34 May 10 06:10 .
    drwxr-xr-x. 3 root root 20 May  9 07:30 ..
    drwxr-xr-x. 2 root root 24 May  9 07:30 jupyter-php
    drwxr-xr-x. 2 root root 40 May 10 06:10 lgo
    
    [root@localhost opt]# cat /usr/local/share/jupyter/kernels/jupyter-php/kernel.json 
    {"argv":["php","\/opt\/jupyter-php\/pkgs\/vendor\/litipk\/jupyter-php\/src\/kernel.php","{connection_file}"],"display_name":"PHP","language":"php","env":{}}
    

This kernel uses react/zmq to listen for Jupyter requests and psysh to execute PHP code interactively.

If you want to change Jupyter's web templates, they live in the following directory:

    [root@localhost vagrant]# ls -la /usr/local/python3/lib/python3.7/site-packages/notebook/templates
    total 92
    drwxr-xr-x.  2 root root  4096 May  9 06:33 .
    drwxr-xr-x. 19 root root  4096 May  9 06:33 ..
    -rw-r--r--.  1 root root   147 May  9 06:33 404.html
    -rw-r--r--.  1 root root   499 May  9 06:33 browser-open.html
    -rw-r--r--.  1 root root  4258 May  9 06:33 edit.html
    -rw-r--r--.  1 root root   856 May  9 06:33 error.html
    -rw-r--r--.  1 root root  4256 May  9 06:33 login.html
    -rw-r--r--.  1 root root  1179 May  9 06:33 logout.html
    -rw-r--r--.  1 root root 23162 May  9 06:33 notebook.html
    -rw-r--r--.  1 root root  6559 May  9 06:33 page.html
    -rw-r--r--.  1 root root  1089 May  9 06:33 terminal.html
    -rw-r--r--.  1 root root 12130 May  9 06:33 tree.html
    -rw-r--r--.  1 root root   544 May  9 06:33 view.html
    

The Jupyter Notebook web frontend talks to the server over WebSocket; the server receives messages, forwards them to the appropriate kernel for execution or control, and pushes the results back to the frontend. Jupyter Notebook can also open a terminal directly and run commands on the remote server; note that the terminal runs as whichever user started Jupyter (root here).

Many web terminals use WebSocket for interaction in the same way, e.g. xterm.js and webtty.
Jupyter Notebook is suited to a single user (single machine); to serve multiple users (e.g. for teaching), use JupyterHub, which can be deployed quickly with Docker.

References:
Jupyter Notebook Extensions
Jupyter – How do I decide which packages I need?
PsySH — an interactive console for PHP
Project Jupyter
WebSocket tutorial

Nginx + Frp/Ngrok: Reverse-Proxying Webhooks to a Local Machine

When integrating with third-party platforms you often need a webhook to receive notifications, such as WeChat or Skype callbacks. These require an entry point reachable from the internet, typically a domain name, which makes things hard to debug when developing locally. Oray (花生壳) is commonly used to proxy local services, but it has limitations, ports among them. Some DNS providers such as DNSPod and Linode offer APIs, so you can also build your own DDNS service, but port restrictions may still apply. Frp and Ngrok are both NAT-traversal tools written in Go that you can deploy yourself: Frp is a reverse proxy that forwards requests to machines behind NAT and supports TCP, UDP and HTTP/HTTPS; Ngrok is a similar tunneling tool that also supports HTTP/HTTPS forwarding. Here Nginx acts as the public-facing reverse proxy: it receives callbacks from the internet and passes them to the local Frp/Ngrok service, which relays the webhook requests on to the local development environment.
A private network was already set up with OpenVPN earlier, so Nginx just needs to forward to the target machine:

    vim /etc/nginx/conf.d/100-dev.example.conf
    

With the following content:

    server {
        listen 80;
        server_name dev.example.com;
        return 301 https://$host$request_uri;
    }
    
    server {
    
        listen 443;
        server_name dev.example.com;
    
        ssl_certificate           /etc/letsencrypt/live/example.com/cert.pem;
        ssl_certificate_key       /etc/letsencrypt/live/example.com/privkey.pem;
    
        ssl on;
        ssl_session_cache  builtin:1000  shared:SSL:10m;
        ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
    
        location / {
          proxy_set_header        Host $host;
          proxy_set_header        X-Real-IP $remote_addr;
          proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header        X-Forwarded-Proto $scheme;
    
          proxy_pass          http://10.9.0.2/;
          proxy_redirect off;
    
        }
    }
    

A Let's Encrypt wildcard certificate is used here. There is no official certbot plugin for this DNS provider, but DNSPod offers an API and there is a third-party plugin, certbot-dns-dnspod. Install the plugin and configure the DNSPod API token:

    $ yum install certbot python2-certbot-nginx
    $ certbot --nginx
    $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    $ pip install certbot-dns-dnspod
    $ vim /etc/letsencrypt/dnspod.conf
    certbot_dns_dnspod:dns_dnspod_email = "123@163.com"
    certbot_dns_dnspod:dns_dnspod_api_token = "123,ca440********"
    
    $ chmod 600 /etc/letsencrypt/dnspod.conf
    

Request the certificate manually:

    $ certbot certonly -a certbot-dns-dnspod:dns-dnspod --certbot-dns-dnspod:dns-dnspod-credentials /etc/letsencrypt/dnspod.conf --server https://acme-v02.api.letsencrypt.org/directory -d example.com -d "*.example.com"
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator certbot-dns-dnspod:dns-dnspod, Installer None
    Starting new HTTPS connection (1): acme-v02.api.letsencrypt.org
    Obtaining a new certificate
    Performing the following challenges:
    dns-01 challenge for example.com
    dns-01 challenge for example.com
    Starting new HTTPS connection (1): dnsapi.cn
    Waiting 10 seconds for DNS changes to propagate
    Waiting for verification...
    Cleaning up challenges
    Resetting dropped connection: acme-v02.api.letsencrypt.org
    
    IMPORTANT NOTES:
     - Congratulations! Your certificate and chain have been saved at:
       /etc/letsencrypt/live/example.com/fullchain.pem
       Your key file has been saved at:
       /etc/letsencrypt/live/example.com/privkey.pem
       Your cert will expire on 2019-08-04. To obtain a new or tweaked
       version of this certificate in the future, simply run certbot
       again. To non-interactively renew *all* of your certificates, run
       "certbot renew"
     - If you like Certbot, please consider supporting our work by:
    
       Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
       Donating to EFF:                    https://eff.org/donate-le
    
    
    $ ls -la /etc/letsencrypt/live/example.com/
    total 12
    drwxr-xr-x 2 root root 4096 May  6 12:06 .
    drwx------ 3 root root 4096 May  6 12:06 ..
    lrwxrwxrwx 1 root root   34 May  6 12:06 cert.pem -> ../../archive/example.com/cert1.pem
    lrwxrwxrwx 1 root root   35 May  6 12:06 chain.pem -> ../../archive/example.com/chain1.pem
    lrwxrwxrwx 1 root root   39 May  6 12:06 fullchain.pem -> ../../archive/example.com/fullchain1.pem
    lrwxrwxrwx 1 root root   37 May  6 12:06 privkey.pem -> ../../archive/example.com/privkey1.pem
    -rw-r--r-- 1 root root  692 May  6 12:06 README
    
    

Configure automatic certificate renewal in the crontab:

    0 0,12 * * * python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew
    

Frp's developers provide pre-built server and client binaries that can be downloaded and used directly. Here the Frp server runs in Docker, based on an existing frps Dockerfile (linked in the original post) with the version bumped to 0.26.0, then built.
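
The exact Dockerfile is not reproduced in the post; a sketch of an equivalent one (the release URL pattern and paths are assumptions) could be:

    FROM alpine:3.9

    ARG FRP_VERSION=0.26.0

    # Download the pre-built frps release binary (illustrative URL pattern)
    RUN apk add --no-cache ca-certificates wget \
        && wget -qO /tmp/frp.tar.gz "https://github.com/fatedier/frp/releases/download/v${FRP_VERSION}/frp_${FRP_VERSION}_linux_amd64.tar.gz" \
        && tar -xzf /tmp/frp.tar.gz -C /tmp \
        && mv "/tmp/frp_${FRP_VERSION}_linux_amd64/frps" /usr/local/bin/frps \
        && rm -rf /tmp/*

    # frps.ini is mounted at /conf, matching the docker run commands used below
    VOLUME ["/conf"]
    EXPOSE 7000

    CMD ["/usr/local/bin/frps", "-c", "/conf/frps.ini"]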

    $ docker build . -t frps:0.26
    $ docker images
    REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
    frps                 0.26                8a87cb91d4de        2 hours ago         21.1MB
    

Test SSH proxying first by creating the server-side configuration file:

    mkdir -p frp/conf
    vim frp/conf/frps.ini
    

frps.ini contains:

    [common]
    bind_port = 7000
    

Run the frp server:

    # remove any previously running container
    $ docker rm frp-server
    $ docker run --name frp-server -v /root/frp/conf:/conf -p 7000:7000 -p 6000:6000 frps:0.26
    2019/04/22 06:41:17 [I] [service.go:136] frps tcp listen on 0.0.0.0:7000
    2019/04/22 06:41:17 [I] [root.go:204] Start frps success
    2019/04/22 06:41:27 [I] [service.go:337] client login info: ip [110.87.98.82:61894] version [0.26.0] hostname [] os [linux] arch [386]
    2019/04/22 06:41:27 [I] [tcp.go:66] [e8783ecea2085e15] [ssh] tcp proxy listen port [6000]
    2019/04/22 06:41:27 [I] [control.go:398] [e8783ecea2085e15] new proxy [ssh] success
    2019/04/22 06:41:41 [I] [proxy.go:82] [e8783ecea2085e15] [ssh] get a new work connection: [110.*.*.*:61894]
    

Two ports are mapped here: port 7000 is the one the frp server listens on so clients can connect, and port 6000 is the port the server listens on to expose the reverse-proxied service, SSH in this case. If you are on Tencent Cloud, these ports must also be opened in the security group.
For the client, simply download the matching release package, which includes sample configurations. Create a local frpc.ini like this:

    [common]
    server_addr = 123.*.*.*
    server_port = 7000
    
    [ssh]
    type = tcp
    local_ip = 127.0.0.1
    local_port = 22
    remote_port = 6000
    

This configuration tells the server to forward its port 6000 to port 22 on the local machine. Run the client locally:

    $ ./frpc -c ./frpc.ini.ssh 
    2019/04/22 06:41:27 [I] [service.go:221] login to server success, get run id [e8783ecea2085e15], server udp port [0]
    2019/04/22 06:41:27 [I] [proxy_manager.go:137] [e8783ecea2085e15] proxy added: [ssh]
    2019/04/22 06:41:27 [I] [control.go:144] [ssh] start proxy success
    

Then connect to the client from the server side: connecting to port 6000 on the server is forwarded to the remote host inside the LAN.

    [rth@centos72]$ ssh -oPort=6000 vagrant@123.*.*.*
    The authenticity of host '[123.*.*.*]:6000 ([123.*.*.*]:6000)' can't be established.
    RSA key fingerprint is SHA256:NhBO/PDL***********************.
    RSA key fingerprint is MD5:20:70:e2:*:*:*:*:*:*:*:*:*:*:*:*:*.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '[123.*.*.*]:6000' (RSA) to the list of known hosts.
    vagrant@123.*.*.*'s password:
    Last login: Mon Apr 22 06:39:07 2019 from 10.0.2.2
    [vagrant@centos64 ~]$ exit
    logout
    Connection to 123.*.*.* closed.
    

Forwarding HTTP with Frp is just as simple. In the conf directory, set up frps.ini to accept HTTP requests on port 8080:

    [common]
    bind_port = 7000
    vhost_http_port = 8080
    
    [root@VM_1_218_centos frp]# docker run --name frp-server -v /root/frp/conf:/conf -p 7000:7000 -p 8080:8080 frps:0.26
    2019/05/06 07:26:28 [I] [service.go:136] frps tcp listen on 0.0.0.0:7000
    2019/05/06 07:26:28 [I] [service.go:178] http service listen on 0.0.0.0:8080
    2019/05/06 07:26:28 [I] [root.go:204] Start frps success
    2019/05/06 07:26:51 [I] [service.go:337] client login info: ip [123.*.*.*:56758] version [0.26.0] hostname [] os [linux] arch [386]
    2019/05/06 07:26:51 [I] [http.go:72] [19f60a30aa924343] [web] http proxy listen for host [test.example.com] location []
    2019/05/06 07:26:51 [I] [control.go:398] [19f60a30aa924343] new proxy [web] success
    2019/05/06 07:27:05 [I] [proxy.go:82] [19f60a30aa924343] [web] get a new work connection: [123.*.*.*:56758]
    2019/05/06 07:27:05 [I] [proxy.go:82] [19f60a30aa924343] [web] get a new work connection: [123.*.*.*:56758]
    2019/05/06 07:27:06 [I] [proxy.go:82] [19f60a30aa924343] [web] get a new work connection: [123.*.*.*:56758]
    

Then configure Nginx to forward requests to it:

    $ vim /etc/nginx/conf.d/100-dev.example.conf
    
        location / {
          proxy_set_header        Host $host;
          proxy_set_header        X-Real-IP $remote_addr;
          proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header        X-Forwarded-Proto $scheme;
    
          proxy_pass          http://127.0.0.1:8080/;
          proxy_redirect off;
    
        }
    

Create the local client configuration frpc.ini for the web service, forwarding HTTP requests that reach dev.example.com:8080 on the server to local port 80:

    [common]
    server_addr = 123.*.*.*
    server_port = 7000
    
    [web]
    type = http
    local_port = 80
    custom_domains = dev.example.com
    

Run the local client:

    [root@vagrant-centos64 frp]# ./frpc -c ./frpc.ini
    2019/05/06 07:26:51 [I] [service.go:221] login to server success, get run id [19f60a30aa924343], server udp port [0]
    2019/05/06 07:26:51 [I] [proxy_manager.go:137] [19f60a30aa924343] proxy added: [web]
    2019/05/06 07:26:51 [I] [control.go:144] [web] start proxy success
    2019/05/06 07:27:37 [E] [control.go:127] work connection closed, EOF
    2019/05/06 07:27:37 [I] [control.go:228] control writer is closing
    2019/05/06 07:27:37 [I] [service.go:127] try to reconnect to server...
    

Visiting dev.example.com now shows the local web server's page. Frp can proxy other kinds of traffic as well, and there are projects built on top of it that add token-based authentication to the forwarding.
Ngrok stopped being open source after 2.0, so only version 1.3 can be self-hosted; docker-ngrok is used to build it here. Building Ngrok requires SSL certificates, so copy the Let's Encrypt certificates generated earlier and adjust server.sh:

    $ git clone https://github.com/hteen/docker-ngrok
    $ cp /etc/letsencrypt/live/example.com/fullchain.pem myfiles/base.pem
    $ cp /etc/letsencrypt/live/example.com/fullchain.pem myfiles/fullchain.pem
    $ cp /etc/letsencrypt/live/example.com/privkey.pem myfiles/privkey.pem
    
    $ vim server.sh
    #!/bin/sh
    set -e
    
    if [ "${DOMAIN}" == "**None**" ]; then
        echo "Please set DOMAIN"
        exit 1
    fi
    
    if [ ! -f "${MY_FILES}/bin/ngrokd" ]; then
        echo "ngrokd is not build,will be build it now..."
        /bin/sh /build.sh
    fi
    
    
    ${MY_FILES}/bin/ngrokd -tlsKey=${MY_FILES}/privkey.pem -tlsCrt=${MY_FILES}/fullchain.pem -domain="${DOMAIN}" -httpAddr=${HTTP_ADDR} -httpsAddr=${HTTPS_ADDR} -tunnelAddr=${TUNNEL_ADDR}
    

Build the Ngrok image:

    [root@VM_1_218_centos docker-ngrok]# docker build -t ngrok:1.3 .
    [root@VM_1_218_centos docker-ngrok]# docker images
    REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
    ngrok                1.3                 dc70190d6377        13 seconds ago      260MB
    frps                 0.26                8a87cb91d4de        2 hours ago         21.1MB
    alpine               latest              cdf98d1859c1        12 days ago         5.53MB
    

Then cross-compile clients for Linux, macOS and Windows:

    $ rm -rf assets/client/tls/ngrokroot.crt
    $ cp /etc/letsencrypt/live/example.com/chain.pem assets/client/tls/ngrokroot.crt
    $ rm -rf assets/server/tls/snakeoil.crt
    $ cp /etc/letsencrypt/live/example.com/cert.pem assets/server/tls/snakeoil.crt
    $ rm -rf assets/server/tls/snakeoil.key
    $ cp /etc/letsencrypt/live/example.com/privkey.pem assets/server/tls/snakeoil.key
    $ GOOS=linux GOARCH=amd64 make release-client
    $ GOOS=windows GOARCH=amd64 make release-client
    $ GOOS=darwin GOARCH=amd64 make release-client
    

Run the Ngrok server on the host, forwarding port 8090 to the container's port 80 and mapping the container's port 4443 to the host's port 7000 so that clients can connect:

    [root@VM_1_218_centos docker-ngrok]# docker run --name ngrok -e DOMAIN='example.com' -p 8090:80 -p 8091:443 -p 7000:4443 -v /root/docker-ngrok/myfiles:/myfiles ngrok:1.3 /bin/sh /server.sh
    [09:18:21 UTC 2019/05/07] [INFO] (ngrok/log.(*PrefixLogger).Info:83) [registry] [tun] No affinity cache specified
    [09:18:21 UTC 2019/05/07] [INFO] (ngrok/log.Info:112) Listening for public http connections on [::]:80
    [09:18:21 UTC 2019/05/07] [INFO] (ngrok/log.Info:112) Listening for public https connections on [::]:443
    [09:18:21 UTC 2019/05/07] [INFO] (ngrok/log.Info:112) Listening for control and proxy connections on [::]:4443
    [09:18:21 UTC 2019/05/07] [INFO] (ngrok/log.(*PrefixLogger).Info:83) [metrics] Reporting every 30 seconds
    [09:18:27 UTC 2019/05/07] [INFO] (ngrok/log.(*PrefixLogger).Info:83) [tun:18e8cd42] New connection from 123.*.*.*:50529
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [tun:18e8cd42] Waiting to read message
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [tun:18e8cd42] Reading message with length: 125
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [tun:18e8cd42] Read message {"Type":"Auth","Payload":{"Version":"2","MmVersion":"1.7","User":"","Password":"","OS":"linux","Arch":"amd64","ClientId":""}}
    [09:18:27 UTC 2019/05/07] [INFO] (ngrok/log.(*PrefixLogger).Info:83) [ctl:18e8cd42] Renamed connection tun:18e8cd42
    [09:18:27 UTC 2019/05/07] [INFO] (ngrok/log.(*PrefixLogger).Info:83) [registry] [ctl] Registered control with id 1957f20b9b3ce3b76c7d8fc8b16276ed
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [ctl:18e8cd42] [1957f20b9b3ce3b76c7d8fc8b16276ed] Writing message: {"Type":"AuthResp","Payload":{"Version":"2","MmVersion":"1.7","ClientId":"1957f20b9b3ce3b76c7d8fc8b16276ed","Error":""}}
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [ctl:18e8cd42] [1957f20b9b3ce3b76c7d8fc8b16276ed] Writing message: {"Type":"ReqProxy","Payload":{}}
    [09:18:27 UTC 2019/05/07] [DEBG] (ngrok/log.(*PrefixLogger).Debug:79) [ctl:18e8cd42] [1957f20b9b3ce3b76c7d8fc8b16276ed] Waiting to read message
    

Download the client built above, create ngrok.cfg, and point it at the server's port 7000:

    server_addr: "example.com:7000"
    trust_host_root_certs: false
    

Specify the subdomain to listen on and the local web port:

    ./ngrok -config=ngrok.cfg -subdomain=dev 9010
    
    ngrok                                                                                                                                                                                                                                                         (Ctrl+C to quit)
                                                                                                                                                                                                                                                                                  
    Tunnel Status                 online                                                                                                                                                                                                                                          
    Version                       1.7/1.7                                                                                                                                                                                                                                         
    Forwarding                    http://dev.flexkit.cn -> 127.0.0.1:9010                                                                                                                                                                                                         
    Forwarding                    https://dev.flexkit.cn -> 127.0.0.1:9010                                                                                                                                                                                                        
    Web Interface                 127.0.0.1:4040                                                                                                                                                                                                                                  
    # Conn                        2                                                                                                                                                                                                                                               
    Avg Conn Time                 46.84ms                                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                                  
    
    
    HTTP Requests                                                         
    -------------                                                         
                                                                          
    GET /teams                    200 OK                   
    

Requests to dev.example.com now reach the web service on local port 9010.
P.S. ZeroTier is a software-defined networking (SDN) tool that lets you build a private network for free; it too can be used to forward server requests to a local machine.

References:
Setting up an ngrok server on CentOS 7
inconshreveable/ngrok
hteen/ngrok
Running your own Ngrok server alongside Nginx
Deploying Ngrok with Docker for NAT traversal
Laravel DDNS package, an alternative to tools like Oray (花生壳)
Dynamic DNS through the DNSPod API
Periodically updating DNS records via the DNSPod API to track a Raspberry Pi's public IP
Generating wildcard SSL certificates with Let's Encrypt
Automatically renewing Let's Encrypt certificates using DNSPod validation
Dynamic DNS with DNSPod on OpenWrt
Connecting to an intranet server from outside using SSH reverse tunnels and autossh
How To Configure Nginx with SSL as a Reverse Proxy for Jenkins

Creating Skype Group Conversations with SkPy

For every product release or incident-support session, the relevant people have to be pulled into the same Skype group chat to discuss, test and support. Doing that by hand every time is tiresome, and reusing an existing conversation drags in people who have nothing to do with the current round. Hence the requirement: define the team members and their relationships in a web backend, and let people create or join the relevant conversation with one click in the web frontend. That calls for an HTTP interface that accepts the participants and a topic and creates the chat room. After searching around, SkPy turned out to be the simplest library: it supports creating conversations, sending/receiving messages, event listening and more. The alternatives were either too limited, required the Skype client to be installed, or did not support newly registered (live) Skype accounts, so SkPy it is. Since this is just a simple HTTP interface, web.py handles the web part.
First install SkPy and web.py. Note that on CentOS 6 the Python certifi package must be pinned to 2015.04.28, otherwise requests will fail:

    sudo pip install SkPy
    sudo pip install web.py
    # on CentOS 6.x the following steps are needed to make requests work
    sudo pip uninstall -y certifi
    sudo pip install certifi==2015.04.28
    

web.py works from a single file; create chat.py as follows:

    import web
    from skpy import Skype
    from skpy import SkypeAuthException
    import logging
    import hashlib
    import os.path
    import io
     
     
    urls = (
        '/', 'index',
        '/chat', 'chat'
    )
    '''
    try:
        import http.client as http_client
    except ImportError:
        # Python 2
        import httplib as http_client
    http_client.HTTPConnection.debuglevel = 1
    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True
    '''
     
     
    class SkypeService:
        def __init__(self):
            self.username = '<skype account>'
            self.password = '<skype password>'
            self.token_file="/tmp/tokens-app"
            self.skype = Skype(connect=False)
            self.skype.conn.setTokenFile(self.getTokenFile())
     
        def getTokenFile(self):
            if not os.path.isfile(self.token_file):
                with io.open(self.token_file, 'a') as file:
                    file.close()
            return self.token_file
     
        def connect(self):
            try:
                self.skype.conn.readToken()
            except SkypeAuthException:
                self.skype.conn.setUserPwd(self.username, self.password)
                self.skype.conn.getSkypeToken()
     
        def creatChatRoom(self, member, topic):
            ch = self.skype.chats.create(members=member)
            ch.setTopic(topic)
            return ch
     
        def getShardLink(self, channel):
            return channel.joinUrl
     
        def createAndGetSharedLink(self, member, topic):
            self.connect()
            ch = self.creatChatRoom(member, topic)
            # ch.sendMsg("welcome")
            # return {"id": ch.id, "url": self.getShardLink(ch)}
            return self.getShardLink(ch)
    
        def getConversationIdByUrl(self, url):
            id = self.skype.chats.urlToIds(url)["Resource"]
            return id
        
        def getChatroomByUrl(self, url):
            id = self.getConversationIdByUrl(url)
            ch = self.getChatroomByConversationId(id)
            return ch
    
        def getChatroomByConversationId(self, id):
            ch = self.skype.chats.chat(id)
            return ch
    
        def sendMessageByConversationId(self, id, message):
            ch = self.getChatroomByConversationId(id)
            return ch.sendMsg(message)
    
        def getMessagesByConversationId(self, id):
            ch = self.getChatroomByConversationId(id)
            return ch.getMsgs()
     
     
    class Storage:
        def __init__(self):
            self.cache_path = '/tmp/'
     
        def set(self, key, value):
            cache_file = self.cache_path + key
            try:
                with io.open(cache_file, 'w') as file:
                    file.write(value)
            except:
                raise Exception('file: {0} write failure'.format(cache_file))
            return True
     
        def get(self, key):
            cache_file = self.cache_path + key
            try:
                with io.open(cache_file) as file:
                    value = file.read()
            except:
                raise Exception('file: {0} not exists'.format(cache_file))
            return value
     
     
    class index:
        def GET(self):
            return "Hello, world!"
     
     
    class chat:
        def GET(self):
            url = web.ctx.home + web.ctx.path + web.ctx.query
            key = hashlib.md5(url).hexdigest()
            storage = Storage()
            try:
                join_url = storage.get(key)
            except:
                param = web.input()
                users = param.user
                member = tuple(users.split(','))
                topic = param.topic
                sk = SkypeService()
                join_url = sk.createAndGetSharedLink(member, topic)
                storage.set(key, join_url)
     
            return join_url
     
     
    if __name__ == "__main__":
        app = web.application(urls, globals())
        app.run()
    

Then run chat.py; it listens on port 8080 by default:

    python chat.py [port]
    

Visit it in a browser:

    http://127.0.0.1:8080/chat?user=user1,user2&topic=19.5Release
    

This creates a chat conversation and returns its join URL; clicking the URL tries to open the Skype app. Note that the conversation is open by default, so anyone can join:

    https://join.skype.com/LRRUuan7kNH3
    

Uncomment the logging block to watch the API calls being made. The author also maintains detailed protocol documentation; the login flow is clearly quite involved, and the docs could serve as a basis for SDKs in other languages.
Note that this bot runs under a personal account and uses the same HTTP API as web.skype.com; it is better to register on the Skype developer platform, which officially offers a Node.js SDK.
Many QQ bots used to rely on the Web QQ interface, which has since been shut down, and the official API has no endpoint for creating arbitrary group chats. Compared with foreign software, domestic APIs are really not open; foreign companies even have dedicated API platform teams responsible for opening up APIs and integrating with third-party platforms.

References:
SkPy

Handling Asynchronous and Scheduled Tasks with Django + Celery

In web development some operations are inevitably time-consuming. Sometimes we don't want to block the web process and prefer to hand the work to the background, notifying the frontend when the result is ready; sometimes the user doesn't need a real-time result and the work can be done asynchronously, e.g. sending email notifications; sometimes we want certain tasks to run on a schedule, reusing existing web code. The first case can be handled via RPC or by calling another service; the second by putting a message on a queue for a background process to handle asynchronously; the third either inside a scheduled job or, like the second case, by emitting a message onto a queue for a background worker. The first case can also be queued, with the web frontend polling the queue or an API for the result. All three cases thus reduce to one pattern: events from different sources (users or timers) produce messages, the messages go onto a queue, background worker processes consume them asynchronously, and results are queried asynchronously.
An earlier Python project followed this pattern. Whether the code is PHP or Python the idea is the same; different languages just bring different tools and strengths. The web frontend displays statistical reports while a background process periodically queries Impala and analyzes the data; the frontend can also trigger events such as recomputing statistics or sending email reports, which a background process listens for and handles. Django, a Python MVT framework, was chosen for the web frontend: it ships with authentication and an admin module, plus a command-line helper for generating projects, making it easy to build and extend a site quickly. Celery, a distributed task-processing platform written in Python, handles the background work; it supports various message brokers such as RabbitMQ and Redis and comes with the Flower monitoring tool. Django itself is just a web framework, so in production it is better served by a dedicated HTTP server: Gunicorn, a WSGI HTTP server written in Python, listens for HTTP requests and calls into Django, and Nginx sits in front of Gunicorn as a proxy to distribute requests. The architecture looks like this:
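
Roughly, the pieces described above fit together like this:

    Browser ──HTTP──> Nginx ──unix socket──> Gunicorn ──WSGI──> Django
                                                                   │  task messages (.delay() / beat schedule)
                                                                   ▼
                                                               RabbitMQ ──> Celery worker(s)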

Install Python 3 and Django and generate the project skeleton. Note: the machine already ships with Python 2.7, so Python 3 is installed alongside it under its own names (hence the symlinks below).

    yum -y update
    yum -y install yum-utils
    yum -y groupinstall development
    yum install zlib-devel bzip2-devel openssl-devel ncurses-devel expat-devel gdbm-devel readline-devel sqlite-devel libffi-devel
    wget https://www.python.org/ftp/python/3.6.2/Python-3.6.2.tar.xz
    tar Jxvf Python-3.6.2.tar.xz
    mv Python-3.6.2 /opt/Python-3.6.2
    cd /opt/Python-3.6.2
    ./configure --enable-shared --prefix=/usr/local/python3
    make && make install
    vim /etc/ld.so.conf
    /usr/local/python3/lib/
    /sbin/ldconfig -v
    ln -s /usr/local/python3/bin/python3.6 /usr/bin/python3.6
    ln -s /usr/bin/python3.6 /usr/bin/python3
    #ln -s /usr/local/python3/bin/python3.6 /usr/bin/python
    ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
    python3 -V
    pip3 -V
    pip3 install Django
    python3 -m django --version
    ln -s /usr/local/python3/bin/django-admin /usr/bin/django-admin
    cd /home/rc
    django-admin startproject qrd
    cd qrd
    python3 manage.py startapp master
    vim qrd/settings.py
    

Edit settings.py to accept requests for the relevant domain names and IPs:

    ALLOWED_HOSTS = ['dev.example.com','localhost','127.0.0.1','10.1.*.*']
    INSTALLED_APPS = [
        'django.contrib.admin',
        'django.contrib.auth',
        'master',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
    ]
    

Give it a quick test:

    python3 manage.py runserver 0:80
    

Then install Gunicorn:

    $ pip3 install gunicorn
    $ pip3 install greenlet
    $ pip3 install ConfigParser
    $ ln -s /usr/local/python3/bin/gunicorn /usr/bin/gunicorn
    

Do a test run:

    $ cd ~/qrd/qrd
    $ gunicorn -w 4 qrd.wsgi --bind  unix:/tmp/gunicorn.sock
    $ sudo vim /etc/systemd/system/gunicorn.service
    

Create a systemd service for Gunicorn:

    [Unit]
    Description=gunicorn daemon
    After=network.target
    [Service]
    User=root
    Group=root
    WorkingDirectory=/home/rc/qrd/qrd
    ExecStart=/usr/bin/gunicorn --access-logfile - --workers 3 --bind unix:/tmp/gunicorn.sock qrd.wsgi
    [Install]
    WantedBy=multi-user.target
    

Start it and enable it at boot:

    systemctl start gunicorn
    systemctl enable gunicorn
    #systemctl stop gunicorn
    

Next, install Nginx:

    yum install nginx
    vim /etc/nginx/conf.d/default.conf
    

Configure Nginx to talk to Gunicorn over the UNIX socket:

    server {
        listen       80;
        server_name  localhost;
        #charset koi8-r;
        #access_log  /var/log/nginx/host.access.log  main;
        #location / {
        #   root   /usr/share/nginx/html;
        #   index  index.html index.htm;
        #
        location / {
            proxy_pass http://unix:/tmp/gunicorn.sock;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    

Start Nginx:

    systemctl start nginx
    systemctl enable nginx
    

Install the MySQL client library and RabbitMQ:

    yum install mysql-community-devel
    pip3 install mysqlclient
    
    
    yum install rabbitmq-server
    rabbitmq-server -detached
    rabbitmqctl status
    

Create a RabbitMQ user and vhost:

    $ rabbitmqctl add_user qrdweb <password>
    Creating user "qrdweb" ...
    ...done.
    $ rabbitmqctl add_vhost qrdweb
    Creating vhost "qrdweb" ...
    ...done.
    $ rabbitmqctl set_user_tags qrdweb management
    Setting tags for user "qrdweb" to [management] ...
    ...done.
    $ rabbitmqctl set_permissions -p qrdweb qrdweb ".*" ".*" ".*"
    Setting permissions for user "qrdweb" in vhost "qrdweb" ...
    ...done.
    $ netstat -apn | grep rabbitmq
    $ rabbitmqctl status
    

Install Celery:

    pip3 install celery
    ln -s /usr/local/python3/bin/celery /usr/bin/celery
    

Test Celery's asynchronous task worker and the beat scheduler:

    $ cd /home/qrd/qrd/
    $ ls
    dashboard  db.sqlite3  manage.py  master  qrd  static
    $ celery -A qrd worker -l info
    /usr/local/python3/lib/python3.6/site-packages/celery/platforms.py:795: RuntimeWarning: You're running the worker with superuser privileges: this is
    absolutely not recommended!
    Please specify a different user using the -u option.
    User information: uid=0 euid=0 gid=0 egid=0
      uid=uid, euid=euid, gid=gid, egid=egid,
     -------------- celery@localhost.localdomain v4.1.0 (latentcall)
    ---- **** -----
    --- * ***  * -- Linux-3.10.0-693.2.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core 2017-09-22 08:19:39
    -- * - **** ---
    - ** ---------- [config]
    - ** ---------- .> app:         qrd:0x7fda62e705c0
    - ** ---------- .> transport:   amqp://qrdweb:**@localhost:5672/qrdweb
    - ** ---------- .> results:     disabled://
    - *** --- * --- .> concurrency: 1 (prefork)
    -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
    --- ***** -----
     -------------- [queues]
                    .> celery           exchange=celery(direct) key=celery
     
    [tasks]
      . dashboard.tasks.debug_task
    [2017-09-22 08:19:39,769: INFO/MainProcess] Connected to amqp://qrdweb:**@127.0.0.1:5672/qrdweb
    [2017-09-22 08:19:39,781: INFO/MainProcess] mingle: searching for neighbors
    [2017-09-22 08:19:40,811: INFO/MainProcess] mingle: all alone
    [2017-09-22 08:19:40,860: WARNING/MainProcess] /usr/local/python3/lib/python3.6/site-packages/celery/fixups/django.py:202: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
      warnings.warn('Using settings.DEBUG leads to a memory leak, never '
    [2017-09-22 08:19:40,860: INFO/MainProcess] celery@localhost.localdomain ready.
    [2017-09-22 08:20:55,023: INFO/MainProcess] Received task: dashboard.tasks.debug_task[71e6c0e1-92e1-494e-b5e9-163eeb7bd24e]
    [2017-09-22 08:20:55,027: INFO/ForkPoolWorker-1] Task dashboard.tasks.debug_task[71e6c0e1-92e1-494e-b5e9-163eeb7bd24e] succeeded in 0.001253978000022471s: 'debug_task'
    [2017-09-22 08:22:21,179: INFO/MainProcess] Received task: dashboard.tasks.debug_task[b81fe9a0-1725-4702-ba0e-13196c9b5977]
    [2017-09-22 08:22:21,180: INFO/ForkPoolWorker-1] Task dashboard.tasks.debug_task[b81fe9a0-1725-4702-ba0e-13196c9b5977] succeeded in 0.00018433199147693813s: 'debug_task'
     
     
    $ celery -A qrd beat -l info -s /tmp/celerybeat-schedule
    celery beat v4.1.0 (latentcall) is starting.
    __    -    ... __   -        _
    LocalTime -> 2017-09-24 04:20:37
    Configuration ->
        . broker -> amqp://qrdweb:**@localhost:5672/qrdweb
        . loader -> celery.loaders.app.AppLoader
        . scheduler -> celery.beat.PersistentScheduler
        . db -> /tmp/celerybeat-schedule
        . logfile -> [stderr]@%INFO
        . maxinterval -> 5.00 minutes (300s)
    [2017-09-24 04:20:37,823: INFO/MainProcess] beat: Starting...
    [2017-09-24 04:20:37,866: INFO/MainProcess] Scheduler: Sending due task add every 10 (qrd.celery.test)
    [2017-09-24 04:20:47,856: INFO/MainProcess] Scheduler: Sending due task add every 10 (qrd.celery.test)
    [2017-09-24 04:20:57,858: INFO/MainProcess] Scheduler: Sending due task add every 10 (qrd.celery.test)
    [2017-09-24 04:20:57,861: INFO/MainProcess] Scheduler: Sending due task qrd.celery.test('world') (qrd.celery.test)
    [2017-09-24 04:21:07,858: INFO/MainProcess] Scheduler: Sending due task add every 10 (qrd.celery.test)
    [2017-09-24 04:21:17,859: INFO/MainProcess] Scheduler: Sending due task add every 10 (qrd.celery.test)
    

Both run successfully. Supervisord can be used to daemonize and monitor the Celery processes.
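
A minimal supervisord program section for the worker might look like this (the program name, paths and user are assumptions based on the layout above):

    [program:qrd-celery-worker]
    command=/usr/bin/celery -A qrd worker -l info
    directory=/home/qrd/qrd
    user=root
    autostart=true
    autorestart=true
    stopwaitsecs=60
    redirect_stderr=true
    stdout_logfile=/var/log/celery/worker.log
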
The resulting Django project layout is the one shown in the ls output above.

First configure Celery in settings.py to use RabbitMQ as the broker and the Django database to store task results:

    import os
    from configparser import RawConfigParser
    
    #https://code.djangoproject.com/wiki/SplitSettings
    config = RawConfigParser()
    config.read('/home/qrd/setting/settings.ini')
    
    STATIC_URL = '/static/'
    STATIC_ROOT = os.path.join(BASE_DIR, 'static')
    
    CELERY_BROKER_URL = 'amqp://usr:pwd@localhost:5672/qrdweb'
    CELERY_RESULT_BACKEND = 'django-db'
    

Then create a celery.py file inside the Django project package:

    from __future__ import absolute_import, unicode_literals
    import os
    from celery import Celery
    from celery.schedules import crontab
    
    # set the default Django settings module for the 'celery' program.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'qrd.settings')
    
    app = Celery('qrd')
    
    # Using a string here means the worker doesn't have to serialize
    # the configuration object to child processes.
    # - namespace='CELERY' means all celery-related configuration keys
    #   should have a `CELERY_` prefix.
    app.config_from_object('django.conf:settings', namespace='CELERY')
    
    # Load task modules from all registered Django app configs.
    app.autodiscover_tasks()
    
    
    app.conf.beat_schedule = {
        'hue-tasks-debug_task': {
            'task': 'hue.tasks.debug_task',
            'schedule': 10.0,
            'args': ()
        },
    }
    

And import the Celery app in __init__.py to finish the integration:

    from __future__ import absolute_import, unicode_literals
    
    # This will make sure the app is always imported when
    # Django starts so that shared_task will use this app.
    from .celery import app as celery_app
    
    __all__ = ['celery_app']
    

Celery tasks in a Django project are defined in each app's tasks.py, which is where autodiscover_tasks looks; for example, qrd.hue.tasks defines the task referenced by the beat schedule above:

    from celery import task
    
    @task
    def debug_task():
        #print(arg)
        return 'debug_task'
    
    
    

It can also be called from other modules:

    from .tasks import debug_task
    
    def save(data):
        debug_task.delay()
    

As an aside, a nice Bootstrap admin template: gentelella.

References:
Celery, the asynchronous task workhorse
Configuring Celery with Django for asynchronous and scheduled tasks
A brief look at which scenarios suit each Gunicorn worker type