This article describes how to handle failures when uploading large files (over 6 GB) through Nginx and PHP. It should be a useful reference for anyone hitting the same problem; if that's you, read on!

Problem Description


I am having a very weird issue uploading large files over 6 GB. My process works like this:

1. Files are uploaded via Ajax to a PHP script.
2. The PHP upload script takes the $_FILE and copies it over in chunks, as in this answer, to a tmp location.
3. The location of the file is stored in the db.
4. A cron script will upload the file to S3 at a later time, again using fopen functions and buffering to keep memory usage low.

My PHP (HHVM) and Nginx configurations are both set to allow files of up to 16 GB; my test file is only 8 GB.

Here is the weird part: the Ajax call ALWAYS times out. But the file is uploaded successfully: it gets copied to the tmp location, the location is stored in the db, it reaches S3, and so on. Yet the Ajax call keeps running for an hour even AFTER all the processing has finished (which takes 10-15 minutes), and only ends by timing out.

What could be causing the server not to send a response only for large files?

Also, the error logs on the server side are empty.

Solution

A large file upload is an expensive and error-prone operation. Nginx and the backend should have correct timeouts configured to cope with slow disk I/O if it occurs. In theory, handling a file upload with multipart/form-data encoding (RFC 1867) is straightforward.
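As a sketch, the relevant Nginx timeout directives look like this (the directive names are real; the values are illustrative assumptions, not tuned recommendations):

```nginx
# Illustrative timeouts for large, slow uploads -- tune for your workload.
client_body_timeout   300s;  # max pause between two successive reads of the request body
send_timeout          300s;  # max pause between two successive writes to the client
proxy_send_timeout    300s;  # max pause while forwarding the request to the backend
proxy_read_timeout    300s;  # max pause while waiting for the backend response
```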

According to developer.mozilla.org, in a multipart/form-data body the HTTP Content-Disposition general header can be used on a subpart of a multipart body to give information about the field it applies to. The subpart is delimited by the boundary defined in the Content-Type header. Used on the body itself, Content-Disposition has no effect.
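For illustration, a minimal multipart/form-data request body with one file field looks like this (the boundary value and file name are arbitrary):

```http
POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----XYZ

------XYZ
Content-Disposition: form-data; name="file"; filename="video.mp4"
Content-Type: application/octet-stream

<file bytes>
------XYZ--
```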

Let's see what happens while a file is being uploaded:

1) the client sends an HTTP request with the file content to the webserver

2) the webserver accepts the request and initiates the data transfer (or returns error 413 if the file size exceeds the limit)

3) the webserver starts to populate buffers (depending on the file and buffer sizes)

4) the webserver sends the file content via a file/network socket to the backend

5) the backend authenticates the initial request

6) the backend reads the file and cuts off the headers (Content-Disposition, Content-Type)

7) the backend dumps the resulting file to disk

8) any follow-up procedures run, such as database changes
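Steps 6) and 7) can be sketched with Python's standard-library email parser; the function name and the synthetic top-level header are assumptions made for illustration, not part of the original setup:

```python
from email.parser import BytesParser
from email.policy import default

def extract_file_part(body: bytes, boundary: str):
    """Cut the multipart headers (Content-Disposition, Content-Type) from an
    uploaded body and return (filename, raw file bytes).

    A synthetic top-level Content-Type header is prepended so the stdlib
    parser can process the bare HTTP request body.
    """
    top = b"Content-Type: multipart/form-data; boundary=" + boundary.encode() + b"\r\n\r\n"
    msg = BytesParser(policy=default).parsebytes(top + body)
    for part in msg.iter_parts():
        filename = part.get_filename()
        if filename:  # the file field carries a filename= parameter
            return filename, part.get_payload(decode=True)
    raise ValueError("no file part found in multipart body")
```

A backend performing step 7) would then write the returned bytes to disk.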

During large file uploads, several problems occur:

• the HTTP request body is dumped to disk and passed to the backend, which processes and copies the file
• the request cannot be authenticated before the HTTP request content has been uploaded to the server
• when uploading large files, the backend rarely needs the file content itself immediately

Let's start with Nginx configured with a new location http://backend/upload to receive the large file upload. Back-end interaction is minimised (Content-Length: 0) and the file is simply stored to disk. Using its buffers, Nginx dumps the file to disk (the file is stored in the temporary directory under a random name, which cannot be changed), then issues a POST request to the backend location http://backend/file with the temporary file name in the X-File-Name header.

To keep extra information, you may use headers on the initial POST request. For instance, an X-Original-File-Name header on the initial request helps you match the file and store the necessary mapping information in the database.
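A minimal sketch of that mapping store, assuming an SQLite table; the table and column names are invented for illustration, not prescribed by the original setup:

```python
import sqlite3

def store_upload_mapping(db: sqlite3.Connection, temp_name: str, original_name: str) -> None:
    """Record which Nginx temp file (X-File-Name) corresponds to which
    user-visible file name (X-Original-File-Name)."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS uploads (temp_name TEXT PRIMARY KEY, original_name TEXT)"
    )
    db.execute("INSERT INTO uploads VALUES (?, ?)", (temp_name, original_name))
    db.commit()

def lookup_original_name(db: sqlite3.Connection, temp_name: str):
    """Return the original file name for a temp file, or None if unknown."""
    row = db.execute(
        "SELECT original_name FROM uploads WHERE temp_name = ?", (temp_name,)
    ).fetchone()
    return row[0] if row else None
```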

Let's see how to make it happen:

1) configure Nginx to dump the HTTP body content to a file and keep it stored: client_body_in_file_only on;

2) create a new backend endpoint http://backend/file to handle the mapping between the temporary file name and the X-File-Name header

3) instrument the AJAX query with any extra headers, such as X-Original-File-Name, that Nginx will pass through with the post-upload request

Configuration:

    location /upload {
      client_body_temp_path      /tmp/;
      client_body_in_file_only   on;
      client_body_buffer_size    1M;
      client_max_body_size       7G;
    
      proxy_pass_request_headers on;
      proxy_set_header           X-File-Name $request_body_file;
      proxy_pass_request_body    off;
      proxy_set_header           Content-Length "";
      proxy_redirect             off;
      proxy_pass                 http://backend/file;
    }
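With that configuration in place, the backend behind http://backend/file only ever sees a tiny POST whose X-File-Name header points at the temp file on shared disk. A minimal sketch of such a handler follows; the function name and any header beyond X-File-Name are assumptions for illustration:

```python
import os
import shutil

def handle_post_upload(headers: dict, storage_dir: str) -> str:
    """Claim the temp file Nginx wrote and move it into permanent storage.

    `headers` holds the request headers of the small follow-up POST;
    the request body itself is empty.
    """
    temp_path = headers.get("X-File-Name")
    if not temp_path or not os.path.isfile(temp_path):
        raise FileNotFoundError("X-File-Name header missing or temp file gone")
    # Fall back to the random temp name when no original name was provided.
    original = headers.get("X-Original-File-Name", os.path.basename(temp_path))
    dest = os.path.join(storage_dir, original)
    shutil.move(temp_path, dest)  # rename on the same fs; copy+delete across devices
    return dest
```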
    

If you need back-end authentication, the only way to handle it is with the auth_request module, for instance:

    location = /upload {
      auth_request               /upload/authenticate;
      ...
    }
    
    location = /upload/authenticate {
      internal;
  proxy_pass_request_body    off;
  proxy_set_header           Content-Length "";
      proxy_pass                 http://backend;
    }
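The authentication endpoint behind that internal location only needs to answer with a status code: any 2xx response lets the upload proceed, while 401/403 rejects it before the body reaches the backend. A sketch of the decision logic (the bearer-token scheme and all names here are invented for illustration):

```python
def authenticate(headers: dict, valid_tokens: set) -> int:
    """Return an HTTP status for the auth_request subrequest:
    200 allows the upload, 401 rejects it."""
    auth = headers.get("Authorization", "")
    token = auth[len("Bearer "):] if auth.startswith("Bearer ") else ""
    return 200 if token in valid_tokens else 401
```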
    

Pre-upload authentication logic protects against unauthenticated requests regardless of the size of the initial POST Content-Length.

This concludes this article on large file uploads (over 6 GB) failing with Nginx and PHP. We hope the answer above is helpful, and thank you for your continued support!
