
Request - Simplified HTTP client


Super simple to use

Request is designed to be the simplest way possible to make http calls. It supports HTTPS and follows redirects by default.

```js
var request = require('request');
request('http://www.google.com', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body) // Show the HTML for the Google homepage.
  }
})
```

Table of contents

  - Streaming
  - Forms
  - HTTP Authentication
  - Custom HTTP Headers
  - OAuth Signing
  - Proxies
  - UNIX Domain Sockets
  - TLS/SSL Protocol
  - Support for HAR 1.2

Request also offers convenience methods like request.defaults and request.post, and there are lots of usage examples and several debugging techniques.


Streaming

You can stream any response to a file stream.

```js
request('http://google.com/doodle.png').pipe(fs.createWriteStream('doodle.png'))
```

You can also stream a file to a PUT or POST request. This method will also check the file extension against a mapping of file extensions to content-types (in this case application/json) and use the proper content-type in the PUT request (if the headers don’t already provide one).

```js
fs.createReadStream('file.json').pipe(request.put('http://mysite.com/obj.json'))
```
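The extension-to-content-type lookup described above can be sketched as follows. This is illustrative only: request resolves content types through a mime library internally, and the table here is a hand-rolled stand-in.

```js
// Hand-rolled stand-in for the file-extension → content-type mapping
// described above (not request's actual lookup table).
var extensionTypes = {
  '.json': 'application/json',
  '.png': 'image/png',
  '.txt': 'text/plain'
}

function contentTypeFor(filename) {
  var ext = filename.slice(filename.lastIndexOf('.'))
  return extensionTypes[ext] || 'application/octet-stream'
}

console.log(contentTypeFor('file.json')) // application/json
```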

Request can also pipe to itself. When doing so, content-type and content-length are preserved in the PUT headers.

```js
request.get('http://google.com/img.png').pipe(request.put('http://mysite.com/img.png'))
```

Request emits a "response" event when a response is received. The response argument will be an instance of http.IncomingMessage.

```js
request
  .get('http://google.com/img.png')
  .on('response', function(response) {
    console.log(response.statusCode) // 200
    console.log(response.headers['content-type']) // 'image/png'
  })
  .pipe(request.put('http://mysite.com/img.png'))
```

To easily handle errors when streaming requests, listen to the error event before piping:

```js
request
  .get('http://mysite.com/doodle.png')
  .on('error', function(err) {
    console.log(err)
  })
  .pipe(fs.createWriteStream('doodle.png'))
```

Now let’s get fancy.

```js
http.createServer(function (req, resp) {
  if (req.url === '/doodle.png') {
    if (req.method === 'PUT') {
      req.pipe(request.put('http://mysite.com/doodle.png'))
    } else if (req.method === 'GET' || req.method === 'HEAD') {
      request.get('http://mysite.com/doodle.png').pipe(resp)
    }
  }
})
```

You can also pipe() from http.ServerRequest instances, as well as to http.ServerResponse instances. The HTTP method, headers, and entity-body data will be sent. Which means that, if you don't really care about security, you can do:

```js
http.createServer(function (req, resp) {
  if (req.url === '/doodle.png') {
    var x = request('http://mysite.com/doodle.png')
    req.pipe(x)
    x.pipe(resp)
  }
})
```

And since pipe() returns the destination stream in ≥ Node 0.5.x you can do one line proxying. :)

```js
req.pipe(request('http://mysite.com/doodle.png')).pipe(resp)
```

Also, none of this new functionality conflicts with request's previous features, it just expands them.

```js
var r = request.defaults({'proxy':'http://localproxy.com'})

http.createServer(function (req, resp) {
  if (req.url === '/doodle.png') {
    r.get('http://google.com/doodle.png').pipe(resp)
  }
})
```

You can still use intermediate proxies, the requests will still follow HTTP forwards, etc.

back to top


Forms

request supports application/x-www-form-urlencoded and multipart/form-data form uploads. For multipart/related refer to the multipart API.

application/x-www-form-urlencoded (URL-Encoded Forms)

URL-encoded forms are simple.

```js
request.post('http://service.com/upload', {form:{key:'value'}})
// or
request.post('http://service.com/upload').form({key:'value'})
// or
request.post({url:'http://service.com/upload', form: {key:'value'}}, function(err,httpResponse,body){ /* ... */ })
```

multipart/form-data (Multipart Form Uploads)

For multipart/form-data we use the form-data library by @felixge. In most cases, you can pass your upload form data via the formData option.

```js
var formData = {
  // Pass a simple key-value pair
  my_field: 'my_value',
  // Pass data via Buffers
  my_buffer: new Buffer([1, 2, 3]),
  // Pass data via Streams
  my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
  // Pass multiple values /w an Array
  attachments: [
    fs.createReadStream(__dirname + '/attachment1.jpg'),
    fs.createReadStream(__dirname + '/attachment2.jpg')
  ],
  // Pass optional meta-data with an 'options' object with style: {value: DATA, options: OPTIONS}
  // Use case: for some types of streams, you'll need to provide "file"-related information manually.
  // See the `form-data` README for more information about options.
  custom_file: {
    value: fs.createReadStream('/dev/urandom'),
    options: {
      filename: 'topsecret.jpg',
      contentType: 'image/jpg'
    }
  }
};
request.post({url:'http://service.com/upload', formData: formData}, function optionalCallback(err, httpResponse, body) {
  if (err) {
    return console.error('upload failed:', err);
  }
  console.log('Upload successful!  Server responded with:', body);
});
```

For advanced cases, you can access the form-data object itself via r.form(). This can be modified until the request is fired on the next cycle of the event-loop. (Note that calling form() will clear the currently set form data for that request.)

```js
// NOTE: Advanced use-case, for normal use see 'formData' usage above
var r = request.post('http://service.com/upload', function optionalCallback(err, httpResponse, body) {...})
var form = r.form();
form.append('my_field', 'my_value');
form.append('my_buffer', new Buffer([1, 2, 3]));
form.append('custom_file', fs.createReadStream(__dirname + '/unicycle.jpg'), {filename: 'unicycle.jpg'});
```

See the form-data README for more information & examples.


multipart/related

Some variations in different HTTP implementations require a newline/CRLF before, after, or both before and after the boundary of a multipart/related request (using the multipart option). This has been observed in the .NET WebAPI version 4.0. You can turn on a boundary preamble or postamble by passing preambleCRLF or postambleCRLF as true in your request options.

```js
request({
  method: 'PUT',
  preambleCRLF: true,
  postambleCRLF: true,
  uri: 'http://service.com/upload',
  multipart: [
    {
      'content-type': 'application/json',
      body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}})
    },
    { body: 'I am an attachment' },
    { body: fs.createReadStream('image.png') }
  ],
  // alternatively pass an object containing additional options
  multipart: {
    chunked: false,
    data: [
      {
        'content-type': 'application/json',
        body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}})
      },
      { body: 'I am an attachment' }
    ]
  }
},
function (error, response, body) {
  if (error) {
    return console.error('upload failed:', error);
  }
  console.log('Upload successful!  Server responded with:', body);
})
```

back to top

HTTP Authentication

```js
request.get('http://some.server.com/').auth('username', 'password', false);
// or
request.get('http://some.server.com/', {
  'auth': {
    'user': 'username',
    'pass': 'password',
    'sendImmediately': false
  }
});
// or
request.get('http://some.server.com/').auth(null, null, true, 'bearerToken');
// or
request.get('http://some.server.com/', {
  'auth': {
    'bearer': 'bearerToken'
  }
});
```

If passed as an option, auth should be a hash containing values:

  - user || username
  - pass || password
  - sendImmediately (optional)
  - bearer (optional)

The method form takes parameters auth(username, password, sendImmediately, bearer).

sendImmediately defaults to true, which causes a basic or bearer authentication header to be sent. If sendImmediately is false, then request will retry with a proper authentication header after receiving a 401 response from the server (which must contain a WWW-Authenticate header indicating the required authentication method).
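As a sketch of what sendImmediately: true implies (illustrative only, not request's internals): the Basic scheme simply base64-encodes user:pass into an Authorization header value, which is then sent up front instead of waiting for a 401 challenge.

```js
// Illustrative only: the Authorization header value the Basic scheme
// produces; with sendImmediately: true it is sent on the first request.
function basicAuthHeader(user, pass) {
  return 'Basic ' + Buffer.from(user + ':' + pass).toString('base64')
}

console.log(basicAuthHeader('username', 'password'))
// Basic dXNlcm5hbWU6cGFzc3dvcmQ=
```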

Note that you can also specify basic authentication using the URL itself, as detailed in RFC 1738. Simply pass the user:password before the host with an @ sign:

```js
var username = 'username',
    password = 'password',
    url = 'http://' + username + ':' + password + '@some.server.com';

request({url: url}, function (error, response, body) {
   // Do more stuff with 'body' here
});
```

Digest authentication is supported, but it only works with sendImmediately set to false; otherwise request will send basic authentication on the initial request, which will probably cause the request to fail.

Bearer authentication is supported, and is activated when the bearer value is available. The value may be either a String or a Function returning a String. Using a function to supply the bearer token is particularly useful if used in conjunction with defaults to allow a single function to supply the last known token at the time of sending a request, or to compute one on the fly.
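A minimal sketch of the function form follows. The token holder and refresh step here are hypothetical; only the shape of auth.bearer comes from the API described above.

```js
// Hypothetical token holder; in a real app this might be refreshed by an
// OAuth flow running elsewhere.
var latestToken = 'initial-token'

function getToken() {
  return latestToken
}

// Because `bearer` is a function, it is evaluated when each request is
// sent, so a defaults-based wrapper always picks up the current token:
// var api = request.defaults({ auth: { bearer: getToken } })

latestToken = 'refreshed-token'
console.log(getToken()) // refreshed-token
```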

back to top

Custom HTTP Headers

HTTP Headers, such as User-Agent, can be set in the options object. In the example below, we call the github API to find out the number of stars and forks for the request repository. This requires a custom User-Agent header as well as https.

```js
var request = require('request');

var options = {
  url: 'https://api.github.com/repos/request/request',
  headers: {
    'User-Agent': 'request'
  }
};

function callback(error, response, body) {
  if (!error && response.statusCode == 200) {
    var info = JSON.parse(body);
    console.log(info.stargazers_count + " Stars");
    console.log(info.forks_count + " Forks");
  }
}

request(options, callback);
```

back to top

OAuth Signing

OAuth version 1.0 is supported. The default signing algorithm is HMAC-SHA1:

```js
// OAuth1.0 - 3-legged server side flow (Twitter example)
// step 1
var qs = require('querystring')
  , oauth =
    { callback: 'http://mysite.com/callback/'
    , consumer_key: CONSUMER_KEY
    , consumer_secret: CONSUMER_SECRET
    }
  , url = 'https://api.twitter.com/oauth/request_token'
  ;
request.post({url:url, oauth:oauth}, function (e, r, body) {
  // Ideally, you would take the body in the response
  // and construct a URL that a user clicks on (like a sign in button).
  // The verifier is only available in the response after a user has
  // verified with twitter that they are authorizing your app.

  // step 2
  var req_data = qs.parse(body)
  var uri = 'https://api.twitter.com/oauth/authenticate'
    + '?' + qs.stringify({oauth_token: req_data.oauth_token})
  // redirect the user to the authorize uri

  // step 3
  // after the user is redirected back to your server
  var auth_data = qs.parse(body)
    , oauth =
      { consumer_key: CONSUMER_KEY
      , consumer_secret: CONSUMER_SECRET
      , token: auth_data.oauth_token
      , token_secret: req_data.oauth_token_secret
      , verifier: auth_data.oauth_verifier
      }
    , url = 'https://api.twitter.com/oauth/access_token'
    ;
  request.post({url:url, oauth:oauth}, function (e, r, body) {
    // ready to make signed requests on behalf of the user
    var perm_data = qs.parse(body)
      , oauth =
        { consumer_key: CONSUMER_KEY
        , consumer_secret: CONSUMER_SECRET
        , token: perm_data.oauth_token
        , token_secret: perm_data.oauth_token_secret
        }
      , url = 'https://api.twitter.com/1.1/users/show.json'
      , qs =
        { screen_name: perm_data.screen_name
        , user_id: perm_data.user_id
        }
      ;
    request.get({url:url, oauth:oauth, qs:qs, json:true}, function (e, r, user) {
      console.log(user)
    })
  })
})
```

For RSA-SHA1 signing, make the following changes to the OAuth options object:

  - Pass signature_method : 'RSA-SHA1'
  - Instead of consumer_secret, specify a private_key string in PEM format

For PLAINTEXT signing, make the following changes to the OAuth options object:

  - Pass signature_method : 'PLAINTEXT'

To send OAuth parameters via query params or in a post body as described in The Consumer Request Parameters section of the oauth1 spec:

  - Pass transport_method : 'query' or transport_method : 'body' in the OAuth options object.
  - transport_method defaults to 'header'

To use Request Body Hash you can either

  - Manually generate the body hash and pass it as a string body_hash: '...'
  - Automatically generate the body hash by passing body_hash: true

back to top


Proxies

If you specify a proxy option, then the request (and any subsequent redirects) will be sent via a connection to the proxy server.

If your endpoint is an https url, and you are using a proxy, then request will send a CONNECT request to the proxy server first, and then use the supplied connection to connect to the endpoint.

That is, first it will make a request like:

```
HTTP/1.1 CONNECT endpoint-server.com:80
Host: proxy-server.com
User-Agent: whatever user agent you specify
```

and then the proxy server will make a TCP connection to endpoint-server on port 80, and return a response that looks like:

```
HTTP/1.1 200 OK
```

At this point, the connection is left open, and the client is communicating directly with the endpoint-server machine.

See the wikipedia page on HTTP Tunneling for more information.

By default, when proxying http traffic, request will simply make a standard proxied http request. This is done by making the url section of the initial line of the request a fully qualified url to the endpoint.

For example, it will make a single request that looks like:

```
HTTP/1.1 GET http://endpoint-server.com/some-url
Host: proxy-server.com
Other-Headers: all go here

request body or whatever
```

Because a pure "http over http" tunnel offers no additional security or other features, it is generally simpler to go with a straightforward HTTP proxy in this case. However, if you would like to force a tunneling proxy, you may set the tunnel option to true.

You can also make a standard proxied http request by explicitly setting tunnel : false, but note that this will allow the proxy to see the traffic to/from the destination server.

If you are using a tunneling proxy, you may set the proxyHeaderWhiteList to share certain headers with the proxy.

You can also set the proxyHeaderExclusiveList to share certain headers only with the proxy and not with destination host.

By default, this set is:

```
accept
accept-charset
accept-encoding
accept-language
accept-ranges
cache-control
content-encoding
content-language
content-length
content-location
content-md5
content-range
content-type
connection
date
expect
max-forwards
pragma
proxy-authorization
referer
te
transfer-encoding
user-agent
via
```

Note that, when using a tunneling proxy, the proxy-authorization header and any headers from custom proxyHeaderExclusiveList are never sent to the endpoint server, but only to the proxy server.

Controlling proxy behaviour using environment variables

The following environment variables are respected by request:

  - HTTP_PROXY / http_proxy
  - HTTPS_PROXY / https_proxy
  - NO_PROXY / no_proxy

When HTTP_PROXY / http_proxy are set, they will be used to proxy non-SSL requests that do not have an explicit proxy configuration option present. Similarly, HTTPS_PROXY / https_proxy will be respected for SSL requests that do not have an explicit proxy configuration option. It is valid to define a proxy in one of the environment variables, but then override it for a specific request, using the proxy configuration option. Furthermore, the proxy configuration option can be explicitly set to false / null to opt out of proxying altogether for that request.

request is also aware of the NO_PROXY/no_proxy environment variables. These variables provide a granular way to opt out of proxying, on a per-host basis. They should contain a comma-separated list of hosts to opt out of proxying. It is also possible to opt out of proxying when a particular destination port is used. Finally, the variable may be set to * to opt out of the implicit proxy configuration of the other environment variables.

Here's some examples of valid no_proxy values:

  - google.com - don't proxy HTTP/HTTPS requests to Google.
  - google.com:443 - don't proxy HTTPS requests to Google, but do proxy HTTP requests to Google.
  - google.com:443, yahoo.com:80 - don't proxy HTTPS requests to Google, and don't proxy HTTP requests to Yahoo!
  - * - don't proxy any requests to any host
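The matching rules above can be sketched like this. This is illustrative only; request's actual implementation differs in details such as case handling and domain-suffix matching.

```js
// Illustrative NO_PROXY interpreter: comma-separated hosts, an optional
// :port restriction, and '*' opting out of proxying entirely.
function shouldProxy(noProxy, host, port) {
  if (!noProxy) return true
  if (noProxy.trim() === '*') return false
  var exempt = noProxy.split(',').some(function (entry) {
    var parts = entry.trim().split(':')
    var entryHost = parts[0]
    var entryPort = parts[1]
    if (entryHost !== host) return false
    // no port in the entry means all ports are exempt
    return !entryPort || String(port) === entryPort
  })
  return !exempt
}

console.log(shouldProxy('google.com:443', 'google.com', 443)) // false
console.log(shouldProxy('google.com:443', 'google.com', 80))  // true
console.log(shouldProxy('*', 'example.com', 80))              // false
```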

back to top

UNIX Domain Sockets

request supports making requests to UNIX Domain Sockets. To make one, use the following URL scheme:

```js
/* Pattern */ 'http://unix:SOCKET:PATH'
/* Example */ request.get('http://unix:/absolute/path/to/unix.socket:/request/path')
```

Note: The SOCKET path is assumed to be absolute to the root of the host file system.
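A sketch of how that scheme decomposes (parseUnixUrl is a hypothetical helper for illustration, not part of request's API):

```js
// Splits 'http://unix:SOCKET:PATH' into its socket path and request path.
// The socket path itself must not contain a colon for this simple regex.
function parseUnixUrl(url) {
  var m = /^http:\/\/unix:([^:]+):(.+)$/.exec(url)
  if (!m) return null
  return { socketPath: m[1], path: m[2] }
}

console.log(parseUnixUrl('http://unix:/absolute/path/to/unix.socket:/request/path'))
// { socketPath: '/absolute/path/to/unix.socket', path: '/request/path' }
```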

back to top

TLS/SSL Protocol

TLS/SSL Protocol options, such as cert, key and passphrase, can be set directly in the options object, in the agentOptions property of the options object, or even in https.globalAgent.options. Keep in mind that, although agentOptions allows for a slightly wider range of configurations, the recommended way is via the options object directly, since settings in agentOptions or https.globalAgent.options are not applied in the same way in proxied environments (where data travels through a TLS connection instead of an http/https agent).

```js
var fs = require('fs')
    , path = require('path')
    , certFile = path.resolve(__dirname, 'ssl/client.crt')
    , keyFile = path.resolve(__dirname, 'ssl/client.key')
    , caFile = path.resolve(__dirname, 'ssl/ca.cert.pem')
    , request = require('request');

var options = {
    url: 'https://api.some-server.com/',
    cert: fs.readFileSync(certFile),
    key: fs.readFileSync(keyFile),
    passphrase: 'password',
    ca: fs.readFileSync(caFile)
};

request.get(options);
```

Using options.agentOptions

In the example below, we call an API that requires a client-side SSL certificate (in PEM format) with a passphrase-protected private key (in PEM format), and disable the SSLv3 protocol:

```js
var fs = require('fs')
    , path = require('path')
    , certFile = path.resolve(__dirname, 'ssl/client.crt')
    , keyFile = path.resolve(__dirname, 'ssl/client.key')
    , request = require('request');

var options = {
    url: 'https://api.some-server.com/',
    agentOptions: {
        cert: fs.readFileSync(certFile),
        key: fs.readFileSync(keyFile),
        // Or use `pfx` property replacing `cert` and `key` when using private key, certificate and CA certs in PFX or PKCS12 format:
        // pfx: fs.readFileSync(pfxFilePath),
        passphrase: 'password',
        securityOptions: 'SSL_OP_NO_SSLv3'
    }
};

request.get(options);
```

It is possible to force using SSLv3 only by specifying secureProtocol:

```js
request.get({
    url: 'https://api.some-server.com/',
    agentOptions: {
        secureProtocol: 'SSLv3_method'
    }
});
```

It is possible to accept other certificates than those signed by generally allowed Certificate Authorities (CAs). This can be useful, for example, when using self-signed certificates. To require a different root certificate, you can specify the signing CA by adding the contents of the CA's certificate file to the agentOptions. The certificate the domain presents must be signed by the root certificate specified:

```js
request.get({
    url: 'https://api.some-server.com/',
    agentOptions: {
        ca: fs.readFileSync('ca.cert.pem')
    }
});
```

back to top

Support for HAR 1.2

The options.har property will override the values: url, method, qs, headers, form, formData, body, json, as well as construct multipart data and read files from disk when request.postData.params[].fileName is present without a matching value.

A validation step will check if the HAR Request format matches the latest spec (v1.2) and will skip parsing if it does not match.

```js
var request = require('request')
request({
  // will be ignored
  method: 'GET',
  uri: 'http://www.google.com',

  // HTTP Archive Request Object
  har: {
    url: 'http://www.mockbin.com/har',
    method: 'POST',
    headers: [
      {
        name: 'content-type',
        value: 'application/x-www-form-urlencoded'
      }
    ],
    postData: {
      mimeType: 'application/x-www-form-urlencoded',
      params: [
        {
          name: 'foo',
          value: 'bar'
        },
        {
          name: 'hello',
          value: 'world'
        }
      ]
    }
  }
})

// a POST request will be sent to http://www.mockbin.com
// with body an application/x-www-form-urlencoded body:
// foo=bar&hello=world
```

back to top

request(options, callback)

The first argument can be either a url or an options object. The only required option is uri; all others are optional.

The callback argument gets 3 arguments:

  1. An error when applicable (usually from http.ClientRequest object)
  2. An http.IncomingMessage object
  3. The third is the response body (String or Buffer, or JSON object if the json option is supplied)

back to top

Convenience methods

There are also shorthand methods for different HTTP METHODs and some other conveniences.


request.defaults(options)

This method returns a wrapper around the normal request API that defaults to whatever options you pass to it.

Note: request.defaults() does not modify the global request API; instead, it returns a wrapper that has your default settings applied to it.

Note: You can call .defaults() on the wrapper that is returned from request.defaults to add/override defaults that were previously defaulted.

For example:
```js
//requests using baseRequest() will set the 'x-token' header
var baseRequest = request.defaults({
  headers: {'x-token': 'my-token'}
})

//requests using specialRequest() will include the 'x-token' header set in
//baseRequest and will also include the 'special' header
var specialRequest = baseRequest.defaults({
  headers: {special: 'special value'}
})
```


Same as request(), but defaults to method: "PUT".

```js
request.put(url)
```


Same as request(), but defaults to method: "PATCH".

```js
request.patch(url)
```

Same as request(), but defaults to method: "POST".



Same as request(), but defaults to method: "HEAD".

```js
request.head(url)
```


Same as request(), but defaults to method: "DELETE".

```js
request.del(url)
```


request.get

Same as request() (for uniformity).

```js
request.get(url)
```


request.cookie

Function that creates a new cookie.

```js
request.cookie('key1=value1')
```


request.jar

Function that creates a new cookie jar.

```js
request.jar()
```

back to top


Debugging

There are at least three ways to debug the operation of request:

  1. Launch the node process like NODE_DEBUG=request node script.js (lib,request,otherlib works too).

  2. Set require('request').debug = true at any time (this does the same thing as #1).

  3. Use the request-debug module to view request and response headers and bodies.

back to top


Timeouts

Most requests to external servers should have a timeout attached, in case the server is not responding in a timely manner. Without a timeout, your code may have a socket open/consume resources for minutes or more.

There are two main types of timeouts: connection timeouts and read timeouts. A connect timeout occurs if the timeout is hit while your client is attempting to establish a connection to a remote machine (corresponding to the connect() call on the socket). A read timeout occurs any time the server is too slow to send back a part of the response.

These two situations have widely different implications for what went wrong with the request, so it's useful to be able to distinguish them. You can detect timeout errors by checking err.code for an 'ETIMEDOUT' value. Further, you can detect whether the timeout was a connection timeout by checking if the err.connect property is set to true.

```js
request.get('http://10.255.255.1', {timeout: 1500}, function(err) {
    console.log(err.code === 'ETIMEDOUT');
    // Set to `true` if the timeout was a connection timeout, `false` or
    // `undefined` otherwise.
    console.log(err.connect === true);
    process.exit(0);
});
```


Examples:

```js
var request = require('request')
  , rand = Math.floor(Math.random()*100000000).toString()
  ;
request(
  { method: 'PUT'
  , uri: 'http://mikeal.iriscouch.com/testjs/' + rand
  , multipart:
    [ { 'content-type': 'application/json'
      , body: JSON.stringify({foo: 'bar', _attachments: {'message.txt': {follows: true, length: 18, 'content_type': 'text/plain' }}})
      }
    , { body: 'I am an attachment' }
    ]
  }
, function (error, response, body) {
    if(response.statusCode == 201){
      console.log('document saved as: http://mikeal.iriscouch.com/testjs/'+ rand)
    } else {
      console.log('error: '+ response.statusCode)
      console.log(body)
    }
  }
)
```

For backwards-compatibility, response compression is not supported by default. To accept gzip-compressed responses, set the gzip option to true. Note that the body data passed through request is automatically decompressed while the response object is unmodified and will contain compressed data if the server sent a compressed response.

```js
var request = require('request')
request(
  { method: 'GET'
  , uri: 'http://www.google.com'
  , gzip: true
  }
, function (error, response, body) {
    // body is the decompressed response body
    console.log('server encoded the data as: ' + (response.headers['content-encoding'] || 'identity'))
    console.log('the decoded data is: ' + body)
  }
)
  .on('data', function(data) {
    // decompressed data as it is received
    console.log('decoded chunk: ' + data)
  })
  .on('response', function(response) {
    // unmodified http.IncomingMessage object
    response.on('data', function(data) {
      // compressed data as it is received
      console.log('received ' + data.length + ' bytes of compressed data')
    })
  })
```

Cookies are disabled by default (else, they would be used in subsequent requests). To enable cookies, set jar to true (either in defaults or options).

```js
var request = request.defaults({jar: true})
request('http://www.google.com', function () {
  request('http://images.google.com')
})
```

To use a custom cookie jar (instead of request's global cookie jar), set jar to an instance of request.jar() (either in defaults or options).

```js
var j = request.jar()
var request = request.defaults({jar:j})
request('http://www.google.com', function () {
  request('http://images.google.com')
})
```


OR

```js
var j = request.jar();
var cookie = request.cookie('key1=value1');
var url = 'http://www.google.com';
j.setCookie(cookie, url);
request({url: url, jar: j}, function () {
  request('http://images.google.com')
})
```

To use a custom cookie store (such as a FileCookieStore which supports saving to and restoring from JSON files), pass it as a parameter to request.jar():

```js
var FileCookieStore = require('tough-cookie-filestore');
// NOTE - currently the 'cookies.json' file must already exist!
var j = request.jar(new FileCookieStore('cookies.json'));
request = request.defaults({ jar : j })
request('http://www.google.com', function() {
  request('http://images.google.com')
})
```

The cookie store must be a tough-cookie store and it must support synchronous operations; see the CookieStore API docs for details.

To inspect your cookie jar after a request:

```js
var j = request.jar()
request({url: 'http://www.google.com', jar: j}, function () {
  var cookie_string = j.getCookieString(url); // "key1=value1; key2=value2; ..."
  var cookies = j.getCookies(url);
  // [{key: 'key1', value: 'value1', domain: "www.google.com", ...}, ...]
})
```

back to top