Scrapy 404 Error – FormRequest redirecting problem on BrickSeek website

I am currently trying to log in to BrickSeek's website using Scrapy's FormRequest method, but the login never succeeds: I keep getting a 404 error when I run the scrapy crawl command. It looks like Scrapy is submitting the form to the wrong page, and, strangely, my login and password end up in the redirected page's URL.

The first request should go to https://brickseek.com/login with the login information submitted to that page. Once logged in, the next page should be https://brickseek.com/account

It keeps redirecting me to https://brickseek.com/products/?search=&log=12345678&pwd=12345678
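That URL pattern suggests an HTML GET form was submitted: a GET form encodes every field into the URL's query string, which is why the credentials are visible there. The stdlib reproduces the exact query seen above:

```python
from urllib.parse import urlencode

# A GET form submission encodes all of its fields into the query string,
# which is why the credentials show up in the redirected URL.
query = urlencode({'search': '', 'log': '12345678', 'pwd': '12345678'})
url = 'https://brickseek.com/products/?' + query
# url == 'https://brickseek.com/products/?search=&log=12345678&pwd=12345678'
```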

Here is the code:

import scrapy
from scrapy.http import FormRequest
from scrapy.utils.response import open_in_browser
from ..items import QuotetutorialItem


class QuoteSpider(scrapy.Spider):
    """Enter Login information located in Inspect Element, find Network, then search for login."""
    name = 'login'
    start_urls = ['https://www.brickseek.com/login/']

    def parse(self, response):
        """Enter Login information located in Inspect Element, find Network, then search for login."""
        return FormRequest.from_response(response, formdata={'log': '12345678', 'pwd': '12345678'},
                                         callback=self.after_login)

    def after_login(self, response):
        """Enter what you want scrapy to web scrape."""
        open_in_browser(response)

        items = QuotetutorialItem()

        div_title = response.css('div.banner_content')
        title = div_title.css('span.banner__title-text::text').extract()
        items['title'] = title

        yield items

When I run the command scrapy crawl login, I get this:

(venv) C:\Users\Trebor\PycharmProjects\NewScraper\quotetutorial\quotetutorial>scrapy crawl login
2020-02-07 00:19:27 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: quotetutorial)
2020-02-07 00:19:27 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:21:23) [MSC v.1916 32 bit (Intel)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Windows-10-10.0.18362-SP0
2020-02-07 00:19:28 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'quotetutorial', 'NEWSPIDER_MODULE': 'quotetutorial.spiders', 'SPIDER_MODULES': ['quotetutorial.spiders'], 'USER_AGENT': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36'}
2020-02-07 00:19:28 [scrapy.extensions.telnet] INFO: Telnet Password: e8e844c7cbefd148
2020-02-07 00:19:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2020-02-07 00:19:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-07 00:19:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-07 00:19:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-07 00:19:28 [scrapy.core.engine] INFO: Spider opened
2020-02-07 00:19:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-07 00:19:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-07 00:19:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.brickseek.com/login/> (referer: None)
2020-02-07 00:19:29 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://brickseek.com/products/?search=&user_login=12345678&user_pass=12345678> from <GET https://www.brickseek.com/products/?search=&user_login=12345678&user_pass=12345678>
2020-02-07 00:19:31 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://brickseek.com/products/?search=&user_login=12345678&user_pass=12345678> (referer: https://www.brickseek.com/login/)
2020-02-07 00:19:31 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <404 https://brickseek.com/products/?search=&user_login=12345678&user_pass=12345678>: HTTP status code is not handled or not allowed
2020-02-07 00:19:31 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-07 00:19:31 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1329,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 41132,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'downloader/response_status_count/404': 1,
 'elapsed_time_seconds': 2.460197,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 2, 7, 6, 19, 31, 193946),
 'httperror/response_ignored_count': 1,
 'httperror/response_ignored_status_count/404': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 11,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2020, 2, 7, 6, 19, 28, 733749)}
2020-02-07 00:19:31 [scrapy.core.engine] INFO: Spider closed (finished)

For comparison, here is the same login request captured from the browser ("Copy as cURL") and converted to Python requests code; the headers appear to be a merge of several captured requests:

import requests

cookies = {
    '__cfduid': 'da5ab5ff9b55ed8d9432e500501554604963',
    '__stripe_mid': '3689131-200-d1-9906-c3022a5de64',
    'wordpress_test_cookie': 'WP+Cookie+check',
    'ac_enable_tracking': '1',
    '_ga': 'GA1.2.114763001.15586575',
    '_hjid': '72b2d2b9-feef-ee-827a-2af3fe09ed',
    '__zlcmid': 'tZiQfA1Ce4mixx',
    '_form_1_': 'Sun Aug 04 2019 05:45:27 GMT-0500 (Central Daylight Time)',
    '_hjIncludedInSample': '1',
    'WRMCembrGyQs': '1L.OZR%40s7hny',
    'CpwgInmYKPQHM': 'mqJCzckWg7YOTl',
    'de-EnfISbN': '%5Dj_2Wqrp8a',
    'pfICHvSKW_lZr': 'ykbU6BaGwi0cS',
    '_gid': 'GA1.2.83233215.150787155',
    'bs_zip': '*****',
    'ABTasty': 'uid%3D190840545159480%26fst%3D15491552621%26pst%3D1579541792%26cs%3D1581042402480%26ns%3D8%26pvt%3D8%26pvs%3D1%26th%3D',
    'cf_clearance': '7bc4c20678aa9d74cbd8517348a72f0451dcf8-1581043791-0-250',
    '_gat_UA-55274071-1': '1',
    'wordpress_logged_in_03f9e3712464d83801b8eecd7edae': '12345678%7C151232373%7C1WXUNJmFaDGPkxwbJiPCmftQZEWYuXsRUe0qFM8kT%7C041b932ee01c4e7f5864d918a678084f4f543de2a532c64322a72ca5d39c40',
}

headers = {
    'authority': 'brickseek.zendesk.com',
    'cache-control': 'max-age=0',
    'origin': 'https://brickseek.com',
    'upgrade-insecure-requests': '1',
    'content-type': 'application/json; charset=UTF-8',
    'user-agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36',
    'sec-fetch-user': '?1',
    'accept': '*/*',
    'sec-fetch-site': 'cross-site',
    'sec-fetch-mode': 'cors',
    'referer': 'https://brickseek.com/',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'Referer': 'https://brickseek.com/',
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Mobile Safari/537.36',
    'cookie': '__cfduid=db073c10d776e79a06f2c8849e128a601554604964; TS011baee6=0130aff232f157d75c3e3c4e190122d294ad62f857be4c386f8fa205f622df7166e3284eeec932768fd9cf24cefd766096438d32; TS01a8b35a=01da6997013403b00a91c12774d32da66071ff49f68f15b5910144b1419a5eebec6ac1031cd95090f3216192aed2407a71e9a0cca',
    'if-none-match': 'W/"abd82b40b936e162ba2fd12ebca21"',
    'if-modified-since': 'Thu, 03 Oct 2019 19:25:17 GMT',
    'Origin': 'https://brickseek.com',
    'authorization': 'd62c460eb96a27a6dc5664ae30a091',
    'Upgrade-Insecure-Requests': '1',
    'Pragma': 'no-cache',
    'Accept-Encoding': 'identity;q=1, *;q=0',
    'Accept-Language': 'en-US,en;q=0.9',
    'Sec-WebSocket-Key': 'AtleA0fubdJ/pCYYR+0lQ==',
    'Upgrade': 'websocket',
    'Sec-WebSocket-Extensions': 'permessage-deflate; client_max_window_bits',
    'Cache-Control': 'no-cache',
    'Connection': 'Upgrade',
    'Sec-WebSocket-Version': '13',
    'Range': 'bytes=0-',
}

data = [
  ('log', '12345678'),
  ('pwd', '12345678'),
  ('wp-submit', 'Log In'),
  ('redirect_to', 'https://brickseek.com/'),
  ('mepr_process_login_form', 'true'),
  ('mepr_is_login_page', 'true'),
  ('_LkNiouJZhOFzHR', 'H4I_a9ZkfdCDPoj'),
  ('_LkNiouJZhOFzHR', 'H4I_a9ZkfdCDPoj'),
  ('YKJmh-NQXRuD', 'O[HDXTUysdu9'),
  ('YKJmh-NQXRuD', 'O[HDXTUysdu9'),
  ('EmWpdPiAGnc', 'ybBQluhGrT5'),
  ('EmWpdPiAGnc', 'ybBQluhGrT5'),
  ('DsdaYLSwTt', 'bWTpV]1ym3'),
  ('DsdaYLSwTt', 'bWTpV]1ym3'),
]

response = requests.post('https://brickseek.com/login/', headers=headers, cookies=cookies, data=data)
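Most of the captured headers above (the WebSocket upgrade pair, the brickseek.zendesk.com authority, the If-None-Match/If-Modified-Since entries) belong to other requests that got mixed into the copy and are not needed for the login POST. A pared-down sketch that only prepares the request locally, without sending anything, so the body can be inspected:

```python
import requests

# Only the form fields the WordPress/MemberPress login endpoint appears
# to use, taken from the captured data above. The request is merely
# *prepared* here -- nothing is sent over the network.
session = requests.Session()
req = requests.Request(
    'POST',
    'https://brickseek.com/login/',
    data={
        'log': '12345678',
        'pwd': '12345678',
        'wp-submit': 'Log In',
        'redirect_to': 'https://brickseek.com/',
    },
)
prepared = session.prepare_request(req)
# session.send(prepared) would submit it; reusing the same Session then
# carries the login cookies to a follow-up GET of /account.
```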


Answer

# -*- coding: utf-8 -*-
import scrapy
from scrapy.utils.response import open_in_browser
data = {
    'log': 'YOUR_USER',
    'pwd': 'YOUR_PASSWORD',
    'wp-submit': 'Log In',
    'redirect_to': 'https://brickseek.com/',
    'mepr_process_login_form': 'true',
    'mepr_is_login_page': 'true',
    '_LkNiouJZhOFzHR': 'H4I_a9ZkfdCDPoj',
    'YKJmh-NQXRuD': 'O[HDXTUysdu9',
    'EmWpdPiAGnc': 'ybBQluhGrT5',
    'DsdaYLSwTt': 'bWTpV]1ym3',
}

headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Origin': 'https://brickseek.com',
    'Connection': 'keep-alive',
    'Referer': 'https://brickseek.com/login/',
    'Upgrade-Insecure-Requests': '1',
    'TE': 'Trailers',
}

class BrickseekSpider(scrapy.Spider):
    name = 'brickseek'
    allowed_domains = ['brickseek.com']

    def start_requests(self):
        yield scrapy.http.FormRequest(
            url='https://brickseek.com/login/',
            headers=headers,
            formdata=data,
            callback=self.parse,
        )

    def parse(self, response):
        open_in_browser(response)
        # your scraping code goes here

That is the full code needed to log in.

User contributions licensed under: CC BY-SA