Tag Archives: mysql

Jupyter Notebook

For programming beginners, an out-of-the-box environment — say, a web page where you can interact with code — is extremely friendly. Sometimes we want to run scripts on a remote server and capture their output, for example for scientific computing; at other times we want to run commands on a server we cannot log into directly, where a web interface that can serve as the entry point (or jump host) is just as welcome. Jupyter Notebook is a web-based interactive environment built on IPython. It supports Python as well as other languages such as Julia and R, and the notebook documents it creates automatically save the executed code and its results, making a session easy to replay.
Jupyter Notebook is easy to install, either through Anaconda or manually. Manual installation under Python 3:

pip3 install jupyter
export PATH=$PATH:/usr/local/python3/bin

Check the installation:

[root@localhost local]# pip3 show jupyter
Name: jupyter
Version: 1.0.0
Summary: Jupyter metapackage. Install all the Jupyter components in one go.
Home-page: http://jupyter.org
Author: Jupyter Development Team
Author-email: jupyter@googlegroups.org
License: BSD
Location: /usr/local/python3/lib/python3.7/site-packages
Requires: jupyter-console, notebook, ipywidgets, nbconvert, qtconsole, ipykernel
Required-by: 

Running jupyter notebook directly generates a locally accessible URL with a token that changes every time, which is inconvenient. Set a password instead, so we can log in with it:

[root@localhost opt]# jupyter notebook password
Enter password: 
Verify password: 
[NotebookPasswordApp] Wrote hashed password to /root/.jupyter/jupyter_notebook_config.json
[root@localhost bin]# cat /root/.jupyter/jupyter_notebook_config.json 
{
  "NotebookApp": {
    "password": "sha1:e04153005102:961b12eef91987a06b497f915fc3f18c62d8f714"
  }
}

Since this runs inside a VM, we do not need Jupyter to open a browser automatically, but we do need it to listen for requests from any IP, on port 9030. We are running Jupyter as root here, which is disallowed by default, hence --allow-root:

[root@localhost opt]# jupyter notebook --no-browser --allow-root --ip 0.0.0.0 --port 9030
[I 02:13:44.320 NotebookApp] Serving notebooks from local directory: /opt
[I 02:13:44.320 NotebookApp] The Jupyter Notebook is running at:
[I 02:13:44.320 NotebookApp] http://(localhost.localdomain or 127.0.0.1):9030/
[I 02:13:44.320 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[I 02:13:59.664 NotebookApp] 302 GET / (192.168.33.1) 1.22ms
[I 02:14:23.597 NotebookApp] Kernel started: 7ad63717-7a65-4dec-9d5a-9af654c28f75
[I 02:14:25.204 NotebookApp] Adapting to protocol v5.1 for kernel 7ad63717-7a65-4dec-9d5a-9af654c28f75
[I 02:14:37.350 NotebookApp] Starting buffering for 7ad63717-7a65-4dec-9d5a-9af654c28f75:ea68853b742c40f8bcf8745529ea95de
[I 02:14:43.735 NotebookApp] Kernel started: 5b569c8d-6936-4bd2-9674-0317c46948f6
[I 02:14:44.124 NotebookApp] Adapting to protocol v5.0 for kernel 5b569c8d-6936-4bd2-9674-0317c46948f6
[2019-06-03 02:14:43] kernel.DEBUG: Connection settings {"processId":6751,"connSettings":{"shell_port":39990,"iopub_port":48184,"stdin_port":40113,"control_port":43426,"hb_port":49075,"ip":"127.0.0.1","key":"d5f89bba-890ecf15e6b20718411170ad","transport":"tcp","signature_scheme":"hmac-sha256","kernel_name":"jupyter-php"},"connUris":{"stdin":"tcp://127.0.0.1:40113","control":"tcp://127.0.0.1:43426","hb":"tcp://127.0.0.1:49075","shell":"tcp://127.0.0.1:39990","iopub":"tcp://127.0.0.1:48184"}} []
[2019-06-03 02:14:44] KernelCore.DEBUG: Initialized sockets {"processId":6751} []

Then open a browser at http://192.168.33.70:9030, enter the password, and you can run Python in the browser.

Jupyter supports SQL through the ipython-sql extension; you only need to install a matching driver, here PyMySQL:

python3 -m pip install PyMySQL

Then you can execute SQL directly in the web interface.
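In a notebook cell, ipython-sql is loaded with %load_ext sql and connects using a SQLAlchemy-style URL. A minimal sketch of building that URL for the PyMySQL driver — the host, credentials, and database name below are placeholders:

```python
# Build the SQLAlchemy-style connection URL that ipython-sql's %sql magic
# expects when PyMySQL is the driver. All values here are placeholders.
def mysql_url(user, password, host, database, port=3306):
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{database}"

url = mysql_url("root", "secret", "192.168.33.70", "test")
print(url)

# In a notebook cell you would then run:
#   %load_ext sql
#   %sql mysql+pymysql://root:secret@192.168.33.70:3306/test
#   %sql SELECT 1
```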

Jupyter has other extensions as well; see here.
Besides Python and SQL, Jupyter Notebook supports other languages, listed here. Kernels typically either execute code via Bash or communicate over ZeroMQ; see here for an implementation. A Jupyter kernel needs to listen on the following sockets:

  • Shell: executes commands
  • IOPub: broadcasts execution results
  • Stdin: receives input
  • Control: receives control commands, such as shutdown and interrupt
  • Heartbeat: liveness checks
This architecture could also be used for remote monitoring of, and interactive execution on, IoT devices.
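The connection-settings debug output above shows how each socket gets its own TCP port. A small sketch that derives the per-socket URIs (the connUris in the log) from such a settings dict:

```python
# Derive the per-socket TCP URIs from a Jupyter kernel connection-settings
# dict, mirroring the connSettings/connUris pair in the log output above.
conn_settings = {
    "ip": "127.0.0.1",
    "transport": "tcp",
    "shell_port": 39990,
    "iopub_port": 48184,
    "stdin_port": 40113,
    "control_port": 43426,
    "hb_port": 49075,
}

def conn_uris(settings):
    base = "{}://{}".format(settings["transport"], settings["ip"])
    names = {"shell": "shell_port", "iopub": "iopub_port",
             "stdin": "stdin_port", "control": "control_port",
             "hb": "hb_port"}
    return {sock: "{}:{}".format(base, settings[key]) for sock, key in names.items()}

print(conn_uris(conn_settings)["shell"])  # tcp://127.0.0.1:39990
```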
    Let's install the PHP 7 kernel here; its author even provides an installer. First, install ZeroMQ so the kernel can communicate with the Jupyter server:

    yum install php-pecl-zmq
    wget https://litipk.github.io/Jupyter-PHP-Installer/dist/jupyter-php-installer.phar
    ./jupyter-php-installer.phar install
    

    Inspect the installed files:

    [root@localhost opt]# ls -la /usr/local/share/jupyter/kernels/
    total 0
    drwxr-xr-x. 4 root root 34 May 10 06:10 .
    drwxr-xr-x. 3 root root 20 May  9 07:30 ..
    drwxr-xr-x. 2 root root 24 May  9 07:30 jupyter-php
    drwxr-xr-x. 2 root root 40 May 10 06:10 lgo
    
    [root@localhost opt]# cat /usr/local/share/jupyter/kernels/jupyter-php/kernel.json 
    {"argv":["php","\/opt\/jupyter-php\/pkgs\/vendor\/litipk\/jupyter-php\/src\/kernel.php","{connection_file}"],"display_name":"PHP","language":"php","env":{}}
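To launch a kernel, Jupyter takes the argv from kernel.json and substitutes the {connection_file} placeholder with the path of a JSON file holding the socket settings. A sketch of that substitution — the connection-file path below is hypothetical:

```python
# Sketch of how Jupyter launches a kernel: each element of "argv" in
# kernel.json is used as-is, with the {connection_file} placeholder
# replaced by the path of the JSON file containing the port settings.
kernel_spec_argv = [
    "php",
    "/opt/jupyter-php/pkgs/vendor/litipk/jupyter-php/src/kernel.php",
    "{connection_file}",
]

def build_command(argv, connection_file):
    return [a.replace("{connection_file}", connection_file) for a in argv]

# Hypothetical path, for illustration only.
cmd = build_command(kernel_spec_argv, "/run/user/0/jupyter/kernel-1234.json")
print(cmd)
```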
    

    This kernel uses react/zmq to listen for Jupyter requests and psysh to execute PHP code interactively.

    If you want to modify Jupyter's web templates, they can be found in this directory:

    [root@localhost vagrant]# ls -la /usr/local/python3/lib/python3.7/site-packages/notebook/templates
    total 92
    drwxr-xr-x.  2 root root  4096 May  9 06:33 .
    drwxr-xr-x. 19 root root  4096 May  9 06:33 ..
    -rw-r--r--.  1 root root   147 May  9 06:33 404.html
    -rw-r--r--.  1 root root   499 May  9 06:33 browser-open.html
    -rw-r--r--.  1 root root  4258 May  9 06:33 edit.html
    -rw-r--r--.  1 root root   856 May  9 06:33 error.html
    -rw-r--r--.  1 root root  4256 May  9 06:33 login.html
    -rw-r--r--.  1 root root  1179 May  9 06:33 logout.html
    -rw-r--r--.  1 root root 23162 May  9 06:33 notebook.html
    -rw-r--r--.  1 root root  6559 May  9 06:33 page.html
    -rw-r--r--.  1 root root  1089 May  9 06:33 terminal.html
    -rw-r--r--.  1 root root 12130 May  9 06:33 tree.html
    -rw-r--r--.  1 root root   544 May  9 06:33 view.html
    

    The Jupyter Notebook front end talks to the server over WebSocket; the server receives each message, forwards it to the corresponding kernel for execution or control, and pushes the result back to the front end. Jupyter Notebook can also open a terminal directly and run commands on the remote server. Note that the terminal runs as whatever user started Jupyter — root, in this case.

    Many web terminals likewise use WebSocket for interaction, such as xterm.js and webtty.
    Jupyter Notebook is suited to single-user (single-machine) use. To serve multiple users (for teaching, say), use JupyterHub, which can be deployed quickly with Docker.

    References:
    Jupyter Notebook Extensions
    Jupyter – How do I decide which packages I need?
    PsySH: an interactive PHP console
    The Jupyter project
    WebSocket tutorial

    Managing Docker Containers with Docker Compose

    The previous post built a PHP runtime environment; now we also need a MySQL service. Simply run:

    root@thinkpad:~# docker run --name rocket-mysql -v /home/rocketfish/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -d mysql:5.7
    

    This automatically downloads the image and gives us a MySQL 5.7 service.
    But now a single web service already takes two docker run invocations — what if there are more web containers, or other service containers?
    Docker's official recommendation is one service per container; multiple services/containers can be managed with docker-compose.
    docker-compose itself is a tool written in Python and can be installed directly with pip:

    root@thinkpad:~# sudo pip install --upgrade pip
    root@thinkpad:~# sudo pip install docker-compose
    

    If you have no local Python environment, you can instead run docker-compose from its Docker image:

    root@thinkpad:/home/compose-web# curl -L https://github.com/docker/compose/releases/download/1.8.0/run.sh > /usr/local/bin/docker-compose
    root@thinkpad:/home/compose-web# docker-compose --version
    # this pulls the docker-compose image on first use; the pip method above is recommended
    

    View the help:

    root@thinkpad:~# docker-compose -h
    Define and run multi-container applications with Docker.
    
    Usage:
      docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
      docker-compose -h|--help
    
    Options:
      -f, --file FILE             Specify an alternate compose file (default: docker-compose.yml)
      -p, --project-name NAME     Specify an alternate project name (default: directory name)
      --verbose                   Show more output
      -v, --version               Print version and exit
      -H, --host HOST             Daemon socket to connect to
    
      --tls                       Use TLS; implied by --tlsverify
      --tlscacert CA_PATH         Trust certs signed only by this CA
      --tlscert CLIENT_CERT_PATH  Path to TLS certificate file
      --tlskey TLS_KEY_PATH       Path to TLS key file
      --tlsverify                 Use TLS and verify the remote
      --skip-hostname-check       Don't check the daemon's hostname against the name specified
                                  in the client certificate (for example if your docker host
                                  is an IP address)
    
    Commands:
      build              Build or rebuild services
      bundle             Generate a Docker bundle from the Compose file
      config             Validate and view the compose file
      create             Create services
      down               Stop and remove containers, networks, images, and volumes
      events             Receive real time events from containers
      exec               Execute a command in a running container
      help               Get help on a command
      kill               Kill containers
      logs               View output from containers
      pause              Pause services
      port               Print the public port for a port binding
      ps                 List containers
      pull               Pulls service images
      push               Push service images
      restart            Restart services
      rm                 Remove stopped containers
      run                Run a one-off command
      scale              Set number of containers for a service
      start              Start services
      stop               Stop services
      unpause            Unpause services
      up                 Create and start containers
      version            Show the Docker-Compose version information
    

    These commands cover creating/starting/stopping/pausing/resuming containers.
    First, create docker-compose.yml, a YAML-format document:

    mkdir docker
    cd docker
    mkdir web
    mkdir db
    vim docker-compose.yml
    

    with the following content:

    version: '2'
    services:
      db:
        image: mysql:5.7
        ports:
        - "3306:3306"
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: root
          MYSQL_DATABASE: web
          MYSQL_USER: root
          MYSQL_PASSWORD: rootweb
        volumes:
        - ./db/data:/var/lib/mysql
      web:
        depends_on:
          - db
        image: nginx-php-fpm:phalcon
        ports:
        - "80:80"
        restart: always
        environment:
          WEB_DB_HOST: db:3306
          WEB_DB_PASSWORD: root
        volumes:
        - ./web/html:/var/www/html/
        links:
        - db
    

    The environment section defines environment variables — for example the host's MAC address — which become available in the container's system environment; every programming language has a way to read them. The variables can also be collected in a file and loaded from there; see here.
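Inside the web container, the variables declared under environment surface as ordinary process environment variables. A sketch of reading WEB_DB_HOST from Python — the variables are seeded manually here only so the snippet is self-contained (in the real container Docker sets them):

```python
import os

# Simulate what Docker does: the compose file's "environment" entries
# become process environment variables inside the container.
os.environ.setdefault("WEB_DB_HOST", "db:3306")
os.environ.setdefault("WEB_DB_PASSWORD", "root")

# Application code then reads them like any other environment variable.
host, _, port = os.environ["WEB_DB_HOST"].partition(":")
print(host, port)  # db 3306
```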
    The volumes section defines the files, directories, or data containers to map into the container; see here. Note that multiple containers sharing one directory can hit write conflicts — with MySQL, for instance, multiple instances need separate data directories. So design your application around which parts are read-only (and safely shareable) and which need writes, and whether the writable parts can live in per-instance temporary directories or in some shared external service.
    Start it up and check:

    root@thinkpad:/home/compose-web# docker-compose up -d
    Creating network "composeweb_default" with the default driver
    Creating composeweb_db_1
    Creating composeweb_web_1
    root@thinkpad:/home/compose-web# docker-compose ps
          Name                   Command             State              Ports            
    ------------------------------------------------------------------------------------
    composeweb_db_1    docker-entrypoint.sh mysqld   Up      0.0.0.0:3306->3306/tcp      
    composeweb_web_1   /start.sh                     Up      443/tcp, 0.0.0.0:80->80/tcp 
    #也可以使用原来的命令
    root@thinkpad:/home/compose-web# docker ps
    CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                         NAMES
    efbdaf257748        nginx-php-fpm:phalcon   "/start.sh"              13 seconds ago      Up 11 seconds       0.0.0.0:80->80/tcp, 443/tcp   composeweb_web_1
    a6935d20911e        mysql:5.7               "docker-entrypoint.sh"   14 seconds ago      Up 13 seconds       0.0.0.0:3306->3306/tcp        composeweb_db_1
    
    

    docker-compose up -d starts all the containers and runs them in the background.
    docker-compose only adds management on top of multiple Docker services; the usual docker commands still work on these containers. Check the web container's IP:

    root@thinkpad:/home/compose-web# docker inspect composeweb_web_1 | grep IPAddress
                "SecondaryIPAddresses": null,
                "IPAddress": "",
                        "IPAddress": "172.18.0.3",
    

    Then visit http://172.18.0.3/ to see the web page.
    To stop a service, use docker-compose stop:

    root@thinkpad:/home/compose-web# docker-compose stop db
    Stopping composeweb_db_1 ... done
    root@thinkpad:/home/compose-web# docker-compose stop web
    Stopping composeweb_web_1 ... done
    

    Starting again brings back services with the same names:

    root@thinkpad:/home/compose-web# docker-compose up -d
    Starting composeweb_db_1
    Starting composeweb_web_1
    

    Container names can also be set explicitly in the config file with the container_name parameter.
    docker-compose supports many other configuration parameters; see here. For example, instead of looking up the container IP with docker inspect as we just did, you can configure a static IP:

    version: '2'
    
    services:
      app:
        image: nginx-php-fpm:phalcon
        networks:
          app_net:
            ipv4_address: 172.18.0.10
            ipv6_address: 2001:3984:3989::10
    
    networks:
      app_net:
        driver: bridge
        driver_opts:
          com.docker.network.enable_ipv6: "true"
        ipam:
          driver: default
          config:
          - subnet: 172.18.0.0/24
            gateway: 172.18.0.1
          - subnet: 2001:3984:3989::/64
            gateway: 2001:3984:3989::1
    

    Custom networks can also be defined.
    Our config file referenced a prebuilt image, but a service can also be built from a Dockerfile. If the Dockerfile is in the web directory:

    build:
      context: ./web
    

    Or point at a specific Dockerfile:

    build:
      context: .
      dockerfile: Dockerfile-alternate
    

    In fact, docker-compose's configuration covers the Dockerfile's options and can be used directly to build multiple Docker services — for example, specifying the command to run:

    command: [/bin/bash]
    

    If this entry command cannot keep running, Docker exits as soon as it finishes. So for a container meant to keep running in the background, the entry point must keep one process in the foreground. For an image running crontab, for instance, the entry command could be:

    cron && bash
    

    cron itself runs in the background; without a foreground companion like bash, the container exits as soon as cron has been launched. Other crontab containers mostly force some process to stay in the foreground, for example:

    cron && tail -f /var/log/cron.log
    

    Note that environment variables set through Docker are not visible when commands run under cron inside the container: printenv from a cron job cannot see the externally set variables, although it can when run manually. These external variables are only available in the shell executed by ENTRYPOINT or CMD, so persist them there — for example, in a start.sh invoked by CMD:

    printenv | grep -E "^HOST" > /root/env.conf && cron && bash
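The same idea as the start.sh line above, sketched in Python: snapshot selected environment variables to a file at container start, then load them back where cron jobs can use them. The HOST_DB variable name is made up for illustration:

```python
import os
import tempfile

# Persist environment variables matching a prefix to a file (what start.sh
# does with printenv | grep), so a cron job can read them back later.
def save_env(prefix, path):
    with open(path, "w") as f:
        for k, v in os.environ.items():
            if k.startswith(prefix):
                f.write(f"{k}={v}\n")

def load_env(path):
    env = {}
    with open(path) as f:
        for line in f:
            k, _, v = line.rstrip("\n").partition("=")
            env[k] = v
    return env

os.environ["HOST_DB"] = "db:3306"   # stand-in for a Docker-set variable
env_path = os.path.join(tempfile.gettempdir(), "env.conf")
save_env("HOST", env_path)
print(load_env(env_path)["HOST_DB"])  # db:3306
```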
    

    Docker Compose can also be combined with Docker Swarm.

    References:
    Install Docker Compose
    Quickstart: Docker Compose and WordPress
    YAML template files
    Introduction to Docker Compose Tool for Multi-Container Applications
    Basic structure of a Dockerfile
    How to Get Environment Variables Passed Through docker-compose to the Containers
    Access environment variables from crontab into a docker container
    How can I access Docker set Environment Variables From a Cron Job
    What's the difference between ENTRYPOINT and CMD in a Dockerfile?
    What is the difference between CMD and ENTRYPOINT in a Dockerfile?
    Docker difference between run, cmd, entrypoint commands

    MySQL Slow Log Analysis

    Recently MySQL kept reporting error 1203: User already has more than 'max_user_connections' active. This had never happened before, and slow queries seemed the likely cause, so I asked ops for a copy of the slow log for analysis (see here for enabling and configuring the slow log).
    The copied log is large, so it needs to be sliced around the time of the incident to narrow the investigation. But running the cutlogbytime.pl script provided here failed:

    [vagrant@centos64 mysql-log-filter-1.9]$ ./cutlogbytime.pl slow.log 1443103200 1443117600 > yestoday.log
    : command not foundline 1:
    : command not foundline 4:
    ./cutlogbytime.pl: line 5: use: command not found
    : command not foundline 5:
    ./cutlogbytime.pl: line 6: use: command not found
    : command not foundline 6:
    : command not foundline 7:
    '/cutlogbytime.pl: line 8: syntax error near unexpected token `{
    '/cutlogbytime.pl: line 8: `if (@ARGV<2){
    
    

    After removing the leading whitespace on the first line, it still failed:

    [vagrant@centos64 mysql-log-filter-1.9]$ ./cutlogbytime.pl slow.log 1443103200 1443117600 > today.log
    -bash: ./cutlogbytime.pl: /usr/bin/perl^M: bad interpreter: No such file or directory
    

    Finally, following a Stack Overflow answer, running it explicitly with Perl (rather than as a shell script) worked:

    [vagrant@centos64 mysql-log-filter-1.9]$ perl cutlogbytime.pl slow.log 1443103200 1443117600 > today.log
    

    Analyze the log with mysqldumpslow (a Perl script). The -s option selects the sort order: r sorts by rows affected (Rows), t by time (Time), c by query count (Count).

    [vagrant@entos64 mysql-log-filter-1.9]$  perl mysqldumpslow.pl -s r -t 10 today4.log
    
    Reading mysql slow query log from today4.log
    Count: 1  Time=190.48s (190s)  Lock=0.00s (0s)  Rows=21829854.0 (21829854), xx[xxxx]@[192.168.10.139]
      SELECT /*!N SQL_NO_CACHE */ * FROM `errormessage`
    
    Count: 32791  Time=40.95s (1342865s)  Lock=0.05s (1512s)  Rows=1.0 (32791), xx[xxxx]@10hosts
      select  *  from connectinfo where  ID=N  and AppType=N  ORDER BY CreateDatetime DESC LIMIT N
    
    Count: 3  Time=3.71s (11s)  Lock=0.02s (0s)  Rows=300.0 (900), xx[xxxx]@2hosts
      select SeverName from errormessage where  ID='S'  and ServerType=N  and level=N  and MsgType <= N
    
    

    The first statement returned 21,829,854 rows. Looking at the raw slow log, every process that subsequently needed to insert into this table was left waiting.

    # Time: 150924  1:03:12
    # User@Host: xx[xxxx] @  [192.168.10.139]  Id: 1493761
    # Query_time: 190.479062  Lock_time: 0.000000 Rows_sent: 21829854  Rows_examined: 21829854
    SET timestamp=1443027792;
    SELECT /*!40001 SQL_NO_CACHE */ * FROM `errormessage`;
    # Time: 150924  1:03:14
    # User@Host: xx[xxxx] @  [192.168.10.168]  Id: 1498010
    # Query_time: 59.669817  Lock_time: 57.159403 Rows_sent: 0  Rows_examined: 0
    SET timestamp=1443027794;
    insert into errormessage (`ID`,`ServerType`,`MsgType`,`Level`,`dev`,`content`,`EventTime`,`SeverName`) values ( '1217', '3', '4', '4', '827', 'erc:start erc error,songid=46243,keymd5=ee1275b26762e85a7f00e9890bdc092e,ercmd5=abbc3ea9102dbd003b7aa0547dcbf6fa', '2015-09-23 21:49:27', '192.168.15.117');
    # User@Host: xx[xxxx] @  [192.168.10.205]  Id: 1494756
    # Query_time: 157.211158  Lock_time: 154.673647 Rows_sent: 0  Rows_examined: 0
    SET timestamp=1443027794;
    insert into errormessage (`ID`,`ServerType`,`MsgType`,`Level`,`dev`,`content`,`EventTime`,`SeverName`) values ( '865', '3', '1', '2', '106', '检测正常!', '2015-09-24 01:01:18', '192.168.0.33');
    # User@Host: xx[xxxx] @  [192.168.10.213]  Id: 1496479
    # Query_time: 100.733230  Lock_time: 98.210902 Rows_sent: 0  Rows_examined: 0
    SET timestamp=1443027794;
    insert into errormessage (`ID`,`ServerType`,`MsgType`,`Level`,`dev`,`content`,`EventTime`,`SeverName`) values ( '2472', '3', '2', '4', '809', 'videoseripnoconfig', '2015-09-24 01:02:26', '192.168.0.18');
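The Query_time header lines in the excerpt above are easy to mine programmatically. A minimal parser sketch that extracts the timing and row counts from each header:

```python
import re

# Minimal parser for slow-log headers like the excerpt above: pull
# Query_time, Lock_time, Rows_sent and Rows_examined from each entry.
HEADER = re.compile(
    r"# Query_time: (?P<query>[\d.]+)\s+Lock_time: (?P<lock>[\d.]+)"
    r"\s+Rows_sent: (?P<sent>\d+)\s+Rows_examined: (?P<examined>\d+)"
)

log = """\
# Query_time: 190.479062  Lock_time: 0.000000 Rows_sent: 21829854  Rows_examined: 21829854
# Query_time: 59.669817  Lock_time: 57.159403 Rows_sent: 0  Rows_examined: 0
"""

entries = [m.groupdict() for m in HEADER.finditer(log)]
slowest = max(entries, key=lambda e: float(e["query"]))
print(slowest["query"], slowest["examined"])
```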
    

    Analyzing several days of logs showed that around each incident this statement was what blocked the subsequent SQL. It turned out that a full-table backup and sync started at 1 a.m. every day, locking the table, so every later insert into it sat waiting. mysqldump should use --single-transaction to avoid locking the table, something like:

    mysqldump -uuser -p --skip-opt -q -R  --single-transaction --default-character-set=utf8 --master-data=2  --create-option --no-autocommit -S ${sock} -B ${DBName}  > backup.sql
    

    But that still scans the whole table for each backup. Incremental backups would have far less impact, or the data could be split hot/cold (new/old), with new (daily) hot data periodically rolled into old (historical) cold data. Ops ultimately solved it by moving the database from a VM to a physical machine, adding a replica, and taking backup syncs from the replica.

    mysqldumpslow above sorted by rows affected; sorting by the other two keys (time, count) instead shows connectinfo as the busiest table, which is what we had always assumed was the cause. I also tried other analysis tools, such as mysqlsla.pl (see here for its parameters and options):

    [vagrant@centos64 mysql-log-filter-1.9]$  perl mysqlsla.pl today.log
    Auto-detected logs as slow logs
    Report for slow logs: today4.log
    60.57k queries total, 17 unique
    Sorted by 't_sum'
    Grand Totals: Time 5.38M s, Lock 3.22M s, Rows sent 21.86M, Rows Examined 184.46M
    
    
    ______________________________________________________________________ 001 ___
    Count         : 25.59k  (42.24%)
    Time          : 3905525.574451 s total, 152.643069 s avg, 113.07488 s to 2720.338946 s max  (72.64%)
      95% of Time : 3260112.482495 s total, 134.12789 s avg, 113.07488 s to 282.366041 s max
    Lock Time (s) : 3168076.975558 s total, 123.820721 s avg, 108.548105 s to 311.639359 s max  (98.45%)
      95% of Lock : 2961933.212121 s total, 121.860167 s avg, 108.548105 s to 123.487106 s max
    Rows sent     : 0 avg, 0 to 0 max  (0.00%)
    Rows examined : 54 avg, 0 to 4.92k max  (0.75%)
    Database      :
    Users         :
            xx@ 192.168.10.147 : 10.65% (2724) of query, 10.26% (6215) of all users
            xx@ 192.168.10.209 : 10.33% (2643) of query, 10.16% (6156) of all users
            xx@ 192.168.10.205 : 10.16% (2599) of query, 9.97% (6036) of all users
            xx@ 192.168.10.211 : 10.13% (2591) of query, 9.98% (6042) of all users
            xx@ 192.168.10.207 : 9.93% (2541) of query, 9.95% (6024) of all users
            xx@ 192.168.10.161 : 9.83% (2515) of query, 9.84% (5960) of all users
            xx@ 192.168.10.149 : 9.81% (2510) of query, 9.95% (6028) of all users
            xx@ 192.168.10.215 : 9.76% (2498) of query, 9.85% (5963) of all users
            xx@ 192.168.10.168 : 9.71% (2485) of query, 9.69% (5868) of all users
            xx@ 192.168.10.213 : 9.69% (2480) of query, 9.66% (5851) of all users
    
    Query abstract:
    SET timestamp=N; UPDATE connectinfo SET devicetag='S', connectipaddress='S', updatedatetime=now() WHERE ID=N AND apptype=N;
    
    Query sample:
    SET timestamp=1443027797;
    update connectinfo set DeviceTag='1070A416AF000000', ConnectIPAddress='60.174.116.165', UpdateDatetime=now() where ID=5358 and AppType=0;
    
    ______________________________________________________________________ 002 ___
    Count         : 32.79k  (54.14%)
    Time          : 1344378.871914 s total, 40.99841 s avg, 2.000747 s to 1944.548192 s max  (25.01%)
      95% of Time : 587407.556704 s total, 18.85678 s avg, 2.000747 s to 233.465042 s max
    Lock Time (s) : 1512.917798 s total, 46.138 ms avg, 76 ▒s to 114.302 ms max  (0.05%)
      95% of Lock : 1414.978902 s total, 45.423 ms avg, 76 ▒s to 50.514 ms max
    Rows sent     : 1 avg, 1 to 1 max  (0.15%)
    Rows examined : 4.92k avg, 4.92k to 4.92k max  (87.41%)
    Database      :
    Users         :
            xx@ 192.168.10.209 : 10.24% (3359) of query, 10.16% (6156) of all users
            xx@ 192.168.10.149 : 10.16% (3331) of query, 9.95% (6028) of all users
            xx@ 192.168.10.147 : 10.11% (3315) of query, 10.26% (6215) of all users
            xx@ 192.168.10.211 : 10.03% (3288) of query, 9.98% (6042) of all users
            xx@ 192.168.10.207 : 10.02% (3285) of query, 9.95% (6024) of all users
            xx@ 192.168.10.161 : 9.97% (3268) of query, 9.84% (5960) of all users
            xx@ 192.168.10.215 : 9.96% (3266) of query, 9.85% (5963) of all users
            xx@ 192.168.10.205 : 9.92% (3254) of query, 9.97% (6036) of all users
            xx@ 192.168.10.168 : 9.86% (3234) of query, 9.69% (5868) of all users
            xx@ 192.168.10.213 : 9.73% (3191) of query, 9.66% (5851) of all users
    
    Query abstract:
    SET timestamp=N; SELECT * FROM connectinfo WHERE ID=N AND apptype=N ORDER BY createdatetime DESC LIMIT N;
    
    Query sample:
    SET timestamp=1443027795;
    select  *  from connectinfo where  ID=7646  and AppType=0  ORDER BY CreateDatetime DESC LIMIT 1;
    
    ______________________________________________________________________ 003 ___
    Count         : 842  (1.39%)
    Time          : 66663.314786 s total, 79.172583 s avg, 2.011408 s to 673.604537 s max  (1.24%)
      95% of Time : 56684.989954 s total, 70.944919 s avg, 2.011408 s to 193.623235 s max
    Lock Time (s) : 48221.988255 s total, 57.27077 s avg, 69 ▒s to 185.402303 s max  (1.50%)
      95% of Lock : 40627.196184 s total, 50.847555 s avg, 69 ▒s to 166.67704 s max
    Rows sent     : 0 avg, 0 to 0 max  (0.00%)
    Rows examined : 0 avg, 0 to 0 max  (0.00%)
    Database      :
    Users         :
            xx@ 192.168.10.207 : 11.64% (98) of query, 9.95% (6024) of all users
            xx@ 192.168.10.205 : 11.28% (95) of query, 9.97% (6036) of all users
            xx@ 192.168.10.213 : 10.93% (92) of query, 9.66% (5851) of all users
            xx@ 192.168.10.161 : 10.45% (88) of query, 9.84% (5960) of all users
            xx@ 192.168.10.149 : 10.33% (87) of query, 9.95% (6028) of all users
            xx@ 192.168.10.211 : 9.74% (82) of query, 9.98% (6042) of all users
            xx@ 192.168.10.147 : 9.38% (79) of query, 10.26% (6215) of all users
            xx@ 192.168.10.215 : 9.38% (79) of query, 9.85% (5963) of all users
            xx@ 192.168.10.168 : 9.03% (76) of query, 9.69% (5868) of all users
            xx@ 192.168.10.209 : 7.84% (66) of query, 10.16% (6156) of all users
    
    Query abstract:
    SET timestamp=N; INSERT INTO errormessage (id,servertype,msgtype,level,dev,content,eventtime,severname) VALUES ( 'S', 'S', 'S', 'S', 'S', 'S', 'S', 'S')1;
    
    

    mysqlsla shows each SQL statement's execution count/percentage, rows affected, users, share of total time, and so on. From this view it is easy to conclude that the connectinfo table (over 95%) is the problem, and SHOW PROCESSLIST suggested the same.
    Since I downloaded this script on its own, running mysqlsla.pl hit a series of errors, which I resolved by installing the missing modules one by one:

    [vagrant@centos64 mysql-log-filter-1.9]$  perl mysqlsla.pl today.log
    Can't locate Time/HiRes.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at mysqlsla.pl line 2092.
    BEGIN failed--compilation aborted at mysqlsla.pl line 2092.
    #fix
    [vagrant@centos64 mysql-log-filter-1.9]$ sudo yum install perl-Time-HiRes
    
    [vagrant@centos64 mysql-log-filter-1.9]$ perl -MCPAN -e 'install DBI'
    Can't locate CPAN.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .).
    BEGIN failed--compilation aborted.
    
    #fix
    [vagrant@centos64 mysql-log-filter-1.9]$ sudo yum install perl-DBI
    

    The proper approach is to run Makefile.PL in the tool's root directory to check its dependencies, for example:

    [vagrant@centos64 percona-toolkit-2.2.15]$ perl Makefile.PL
    #if you see the error below, install the missing module first; the simple fix is
    #sudo yum install perl-devel
    Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
    BEGIN failed--compilation aborted at Makefile.PL line 1.
    

    Then install the corresponding modules; see here.
    Percona's pt-query-digest tool produced results similar to mysqlsla's:

    [vagrant@vagrant-centos64 bin]$ pt-query-digest ../../today4.log
    
    # 9.8s user time, 700ms system time, 21.05M rss, 73.66M vsz
    # Current date: Mon Oct  5 05:52:01 2015
    # Hostname: vagrant-centos64.vagrantup.com
    # Files: ../../today.log
    # Overall: 60.57k total, 17 unique, 4.54 QPS, 402.68x concurrency ________
    # Time range: 2015-09-23 22:17:29 to 2015-09-24 02:00:00
    # Attribute          total     min     max     avg     95%  stddev  median
    # ============     ======= ======= ======= ======= ======= ======= =======
    # Exec time        5376198s      2s   2720s     89s    258s    118s     57s
    # Lock time        3217840s       0    312s     53s    118s     60s    48ms
    # Rows sent         20.85M       0  20.82M  361.00    0.99  84.46k    0.99
    # Rows examine     175.91M       0  20.82M   2.97k   4.71k  84.48k   4.71k
    # Query size         7.85M      64     597  135.90  151.03   27.56  112.70
    
    # Profile
    # Rank Query ID           Response time      Calls R/Call   V/M   Item
    # ==== ================== ================== ===== ======== ===== ========
    #    1 0xF1132168DB0BFC57 3905525.5745 72.6% 25586 152.6431 61.61 UPDATE connectinfo
    #    2 0xD4B317E755A0ABD7 1344378.8719 25.0% 32791  40.9984 30... SELECT connectinfo
    #    3 0xE23849EE6FB19DAE   66663.3148  1.2%   842  79.1726 62.99 INSERT errormessage
    ...
    
    # Query 1: 7.52 QPS, 1.15kx concurrency, ID 0xF1132168DB0BFC57 at byte 16243195
    # This item is included in the report because it matches --limit.
    # Scores: V/M = 61.61
    # Time range: 2015-09-24 01:03:17 to 02:00:00
    # Attribute    pct   total     min     max     avg     95%  stddev  median
    # ============ === ======= ======= ======= ======= ======= ======= =======
    # Count         42   25586
    # Exec time     72 3905526s    113s   2720s    153s    271s     97s    124s
    # Lock time     98 3168077s    109s    312s    124s    118s     14s    118s
    # Rows sent      0       0       0       0       0       0       0       0
    # Rows examine   0   1.33M       0   4.80k   54.39       0  504.65       0
    # Query size    48   3.78M     149     157  154.94  151.03    0.52  151.03
    # String:
    # Hosts        192.168.10.147 (2724/10%)... 9 more
    # Users        gate
    # Query_time distribution
    #   1us
    #  10us
    # 100us
    #   1ms
    #  10ms
    # 100ms
    #    1s
    #  10s+  ################################################################
    # Tables
    #    SHOW TABLE STATUS LIKE 'connectinfo'\G
    #    SHOW CREATE TABLE `connectinfo`\G
    update connectinfo set DeviceTag='10705BDDCD000000', ConnectIPAddress='115.231.63.78', UpdateDatetime=now() where ID=6912 and AppType=0\G
    # Converted for EXPLAIN
    # EXPLAIN /*!50100 PARTITIONS*/
    select  DeviceTag='10705BDDCD000000', ConnectIPAddress='115.231.63.78', UpdateDatetime=now() from connectinfo where  ID=6912 and AppType=0\G
    

    P.S. A few days after this was resolved, the database had problems again. I assumed more slow SQL, but it turned out to be a faulty switch port: data transfer was so slow that replication fell behind.

    References:
    MySQL slow log analysis
    Slicing the slow log by time
    MySQL slow log
    MySQL optimization: sharpening the tools (2)
    Common log statistics techniques
    Performance tuning: MySQL optimization (1)
    Installing and using mysqlsla
    Installing the DBI module: Can't locate DBI.pm in @INC (MySQL interface)
    Issue 12: Can't locate Time/HiRes.pm
    Enabling MySQL slow query logging
    "Can't locate ExtUtils/MakeMaker.pm" while compiling git
    analysing slow MySQL queries with pt-query-digest
    How mysqldump backups work
    mysqldump --single-transaction, yet update queries are waiting for the backup
    Incremental MySQL backup and restore with binlog
    MySQL backup and recovery strategy (full + incremental)

    MySQL Flash Sale Optimization

    Today I studied Lou Fangxin's talk "A SQL-based flash sale solution", which explains how to locate and fix the problems in flash-sale workloads.
    It starts with the inventory model: inventory splits into front-end inventory, back-end inventory, and physical inventory. During a flash sale the main problems are:

• Inaccurate stock counts — only after ordering and paying do you learn stock is zero; overselling or underselling
• Many dead orders — users order but never pay, so conversion is low
• A hot item dragging down the entire site

    The technical points a flash sale has to solve include:

• Decrementing the balance by one
• An operation detail log, for auditing/reconciliation and to keep one account from participating repeatedly
• Full transactions, so that recording the detail and deducting stock complete together
• Durable data — in-memory data is unreliable

    Several inventory designs were built against these requirements, such as MySQL + read/write cache. The read-cache approach suffers read lag that hurts user experience; the write-cache approach has consistency problems when multiple apps write. A MySQL + cache + NoSQL design was deemed too complex and never implemented.

    So the focus returned to optimizing MySQL itself. MySQL's strength is its mature transaction machinery and stable software. The hard parts: single-row concurrency, hot items, and instantaneous load — in the minute before the sale, millions of users can pile up, block, and drag the site down. Optimization proceeded along three lines:

• Transaction optimization: single-row updates
• Concurrency optimization: capping maximum concurrency
• Queueing optimization: contention for the same item

    Analyzing the flash-sale processing logic and scanning the system code showed that most programs wait to confirm the UPDATE's affected-row count before committing the transaction:

• Begin the transaction
• Insert the inventory detail row
• Update the inventory balance
• Commit
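A sketch of those four steps, with SQLite standing in for MySQL/InnoDB and made-up table names: a conditional single-row UPDATE decrements the balance, and the affected-row count decides whether the detail row and the deduction commit together or both roll back:

```python
import sqlite3

# SQLite stands in for MySQL/InnoDB; "stock" and "stock_log" are made-up
# names. The conditional UPDATE only succeeds while stock remains, and
# rowcount tells us whether the deduction actually happened.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (item_id INTEGER PRIMARY KEY, remain INTEGER)")
conn.execute("CREATE TABLE stock_log (item_id INTEGER, user_id INTEGER)")
conn.execute("INSERT INTO stock VALUES (1, 2)")   # two units for sale

def buy(conn, item_id, user_id):
    cur = conn.cursor()
    cur.execute("INSERT INTO stock_log VALUES (?, ?)", (item_id, user_id))
    cur.execute(
        "UPDATE stock SET remain = remain - 1 "
        "WHERE item_id = ? AND remain > 0", (item_id,))
    if cur.rowcount == 1:
        conn.commit()      # detail row and deduction land together
        return True
    conn.rollback()        # sold out: undo the detail row as well
    return False

print([buy(conn, 1, u) for u in (101, 102, 103)])  # [True, True, False]
```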

    With good design, a MySQL INSERT that avoids auto-increment columns does not block other requests. But MySQL UPDATEs to the same row are serialized: the lock is only released after the remote client sends its commit command, letting other sessions proceed. Ignoring I/O and lock contention, a simple update statement executes in about 0.1 ms, while typical network latency is 0.4-0.8 ms — that is, waiting for the commit notification takes several times longer than actually executing the SQL.
    So the SQL syntax was extended (OneSQL) to commit automatically as soon as the UPDATE finishes, without waiting for the client's commit command, saving that network round trip of transaction wait and improving throughput.
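Back-of-the-envelope arithmetic with the figures above (0.6 ms is just the midpoint of the quoted 0.4-0.8 ms range) shows why committing on the server side matters for a hot row:

```python
# The row lock on a hot item is held for the UPDATE itself plus the round
# trip spent waiting for the client's COMMIT. Figures from the text above.
exec_ms = 0.1    # single-row UPDATE execution time
rtt_ms = 0.6     # midpoint of the 0.4-0.8 ms network latency cited

updates_per_sec_classic = 1000 / (exec_ms + rtt_ms)
updates_per_sec_autocommit = 1000 / exec_ms   # OneSQL-style server-side commit

print(round(updates_per_sec_classic), round(updates_per_sec_autocommit))
```

The lock on the hot row is held across the commit round trip, so committing on the server side raises the per-row update ceiling by roughly the ratio of (exec + RTT) to exec.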

    When a flash sale brings a flood of requests, they need to be queued so that too many requests do not drag MySQL down:

• Queueing in the application layer has drawbacks: applications must be modified to use a unified framework (with cross-language concerns), and control is imprecise when the application cluster scales out (connection allocation)
• Queueing inside MySQL has advantages: minimal application changes (only a few SQL statements), no unified framework, precise queueing, and full use of InnoDB's performance

    Hence a MySQL-compatible distributed data access layer (OneProxy) was developed to queue concurrent requests.

    In addition, hot items are split out into dedicated, tuned databases. Nowadays items go up before Singles' Day, so users can favorite or pre-order them; merchants can prepare more stock, and the platform can spot hot items in advance and optimize for them.

    In summary: business optimization should proceed step by step, with a deep understanding of the business logic and technical details and a comparison of different solutions — even an everyday UPDATE has room for optimization. It also needs targeted measures from other angles, such as queueing under high concurrency and separating hot data.

    Beyond back-end database tuning, for flash-sale and lottery traffic the core of the problem is controlling the flow per unit time so it never exceeds what the back end can handle. Front-end tactics include:

• Run the sale in several small batches
• Make users play a game first, such as a lottery, before they can buy
• Randomly filter out part of the requests so only a fraction enters the system, e.g. 1/10
• Threshold control: once a threshold is reached, stop accepting new requests
• Pre-assigned queue numbers: users without a number are turned away (user classification)
• CAPTCHA verification

    Separately, OneProxy's connection-pool feature is very useful for PHP. PHP runs under CGI, so every incoming request creates a fresh database connection to talk to MySQL; under heavy concurrency this leads to "too many connections" and can even drag the database down ("MySQL server has gone away"), affecting other services. A MySQL connection pool therefore matters a great deal for PHP.
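A minimal, illustrative connection pool (not OneProxy's actual implementation): a fixed set of connections is created once and handed out per request, and requests queue when the pool is exhausted:

```python
import queue

# Illustrative pool: connections are created once by a factory and reused,
# instead of opening a new one for every request as CGI-style PHP does.
class ConnectionPool:
    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        return self._pool.get(timeout=timeout)   # blocks: requests queue up

    def release(self, conn):
        self._pool.put(conn)

# A dummy factory stands in for a real database connect() call.
pool = ConnectionPool(lambda: object(), size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()        # reuses c1 instead of opening a new connection
print(c3 is c1)            # True
```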

    Update: when Xiaomi built its flash-purchase system, the early PHP + MySQL version hit problems such as concurrency performance and data consistency — issues OneSQL has since addressed — but Xiaomi instead rewrote it in Go as its flash-sale system (BigTap).

    References:
    How to guarantee correctness in limited-quantity flash sales and other high-concurrency events
    MySQL 5.6.17 / Percona 5.6.16 / MariaDB 10.0.11 / OneSQL 5.6.16 TpmC benchmark
    Website performance techniques, starting from 12306.cn
    The story behind "Mi Fan Festival": building Xiaomi's flash-purchase system
    Large-scale concurrency in web systems: e-commerce flash sales
    OneProxy: how to add connection pooling to PHP pages and other Ruby/Python/Go programs
    A MySQL connection pool built on Swoole

    PHP Security Issues

    Many open-source PHP applications listed on the WooYun vulnerability platform have security holes. Most of them stem from insufficiently strict filtering and missing escaping of PHP variables: values from $_GET, $_POST, and $_COOKIE are used directly, leading to SQL injection and cross-site scripting (XSS). These vulnerabilities are entirely avoidable.
    First, SQL injection. Injection usually occurs where parameters are loosely filtered and SQL is built by string concatenation, so avoid concatenating SQL and prefer PDO or mysqli.
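The same advice, illustrated with Python's DB-API (SQLite in place of MySQL/PDO): a placeholder sends user input as data, while string concatenation lets a crafted value rewrite the query:

```python
import sqlite3

# SQLite and a toy table illustrate the point; the mechanics are the same
# as PDO prepared statements in PHP.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "' OR '1'='1"

# Parameterized: the input is bound as data, so the attack matches nothing.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(rows)  # []

# Concatenated: the quote breaks out of the literal and the OR always holds.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'").fetchall()
print(unsafe)  # [('alice',)]
```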