Why scrape
- Machine translation is an extremely data-hungry task, especially in vertical domains; with too little data the model overfits badly and its output at inference time becomes unstable.
- To mitigate this, one option is to gather more corpora: WMT and OPUS are both good sources, though you may need a proxy, or the download speeds can be painful. Another is the back-translation trick proposed in "Improving Neural Machine Translation Models with Monolingual Data": use large monolingual corpora plus a third-party translation API to build an augmented parallel corpus.
- Back-translation is also one of the standard NLP data-augmentation techniques, and a reliable score booster in competitions.
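The back-translation idea can be sketched in a few lines. This is a minimal illustration, assuming a placeholder `translate` function with a hard-coded lookup; a real setup would call an MT model or a third-party API such as the one scraped below.

```python
def translate(text, src, tgt):
    # stand-in lookup; replace with a real translation call
    demo = {("zh", "en"): {"机器翻译很有趣": "machine translation is fun"}}
    return demo.get((src, tgt), {}).get(text, text)

def back_translate(monolingual_zh):
    """Turn target-side monolingual zh sentences into synthetic (en, zh) pairs."""
    pairs = []
    for zh in monolingual_zh:
        synthetic_en = translate(zh, "zh", "en")  # back-translate zh -> en
        pairs.append((synthetic_en, zh))          # synthetic source, real target
    return pairs
```

The synthetic source side is noisy, but the target side is real text, which is exactly what the decoder needs more of.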
When to use this
- Large data volumes, high QPS, vertical domains (medicine, electronics, water conservancy)
- If you only need to translate tens of thousands of general-domain sentences, just call the official API of the Baidu Translate Open Platform instead.
How to scrape
request_url
https://fanyi.baidu.com/v2transapi
form data
from: en
to: zh
query: machine translation
transtype: translang
simple_means_flag: 3
sign: 256172.461725
token: 4a634f2d5f893a75d6f18642e0e4b224
domain: common
IP pool
A key anti-scraping measure is monitoring per-IP request volume and frequency, so a dynamic IP pool is essential equipment for any scraper.
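A minimal rotation sketch, assuming your provider (e.g. Abuyun) hands out a list of proxy endpoints; the URLs below are placeholders.

```python
import itertools

class ProxyPool:
    def __init__(self, proxy_urls):
        self._cycle = itertools.cycle(proxy_urls)

    def next(self):
        """Return the next proxy in the dict format requests expects."""
        url = next(self._cycle)
        return {"http": url, "https": url}

pool = ProxyPool(["http://user:pwd@proxy1:9020", "http://user:pwd@proxy2:9020"])
# later: requests.post(..., proxies=pool.next())
```

Round-robin is the simplest policy; a production pool would also evict proxies that start returning errors or captchas.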
Obtaining the token
The token validates that a request is legitimate. It comes back with the Baidu Translate home page, hidden inside:
window['common'] = {
token: '4a634f2d5f893a75d6f18642e0e4b224',
...
}
First, build a token pool. If you request the page directly with Python's requests package, you will fail to obtain many valid tokens even behind a dynamic IP proxy. So instead use selenium to drive a real browser; the number of tokens needed is small anyway. Save the cookie along with each token:
import re
import time

from selenium import webdriver

option = webdriver.ChromeOptions()
option.add_argument("--start-maximized")
# proxy_auth_plugin_path is the extension Abuyun provides for using dynamic IP
# proxies with selenium, see https://www.abuyun.com/http-proxy/pro-manual-selenium.html
option.add_extension(proxy_auth_plugin_path)
driver = webdriver.Chrome(chrome_options=option)
driver.get("https://fanyi.baidu.com")
driver.find_element_by_xpath('//*[@id="baidu_translate_input"]').send_keys("你好")
time.sleep(4)  # give the page time to finish its requests and set cookies
cookie_str = ""
for item in driver.get_cookies():
    cookie_str = item["name"] + "=" + item["value"] + ";" + cookie_str
print(cookie_str)
html = driver.page_source
li = re.search(r"<script>\s*window\['common'\] = ([\s\S]*?)</script>", html)
token = re.search(r"token: '([a-zA-Z0-9]+)',", li.group(1))
token_str = token.group(1)
print(token_str)
driver.close()
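The regex step above can be wrapped in a small helper, which makes it easy to loop over proxies and accumulate a token pool (function name is my own):

```python
import re

def extract_token(page_source):
    """Pull the token out of the window['common'] block in the page source."""
    block = re.search(r"window\['common'\] = ([\s\S]*?)</script>", page_source)
    if block is None:
        return None  # page layout changed or request was blocked
    token = re.search(r"token: '([a-zA-Z0-9]+)'", block.group(1))
    return token.group(1) if token else None
```

Returning None instead of raising lets the pool-building loop simply skip blocked responses and retry with the next proxy.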
Obtaining the sign
The sign is computed from the query string by JavaScript on the page. Find the relevant JS and run it from Python with PyExecJS:
import execjs  # pip install PyExecJS

class Py4Js_baidu():
    def __init__(self):
        # JS lifted from the Baidu Translate page; computes the sign from the query
        self.ctx = execjs.compile("""
var i = "320305.131321201"
function a(r){if(Array.isArray(r)){for(var o=0,t=Array(r.length);o<r.length;o++)t[o]=r[o];
return t}return Array.from(r)}
function n(r,o){for(var t=0;t<o.length-2;t+=3){var a=o.charAt(t+2);a=a>="a"?a.charCodeAt(0)-87:Number(a),a="+"===o.charAt(t+1)?r>>>a:r<<a,r="+"===o.charAt(t)?r+a&4294967295:r^a
}return r}
function e(r) {
var o = r.match(/[\uD800-\uDBFF][\uDC00-\uDFFF]/g);
if (null === o) {
var t = r.length;
t > 30 && (r = "" + r.substr(0, 10) + r.substr(Math.floor(t / 2) - 5, 10) + r.substr( - 10, 10))
} else {
for (var e = r.split(/[\uD800-\uDBFF][\uDC00-\uDFFF]/), C = 0, h = e.length, f = []; h > C; C++)"" !== e[C] && f.push.apply(f, a(e[C].split(""))),
C !== h - 1 && f.push(o[C]);
var g = f.length;
g > 30 && (r = f.slice(0, 10).join("") + f.slice(Math.floor(g / 2) - 5, Math.floor(g / 2) + 5).join("") + f.slice( - 10).join(""))
}
var u = void 0,
l = "" + String.fromCharCode(103) + String.fromCharCode(116) + String.fromCharCode(107);
u = null !== i ? i: (i = window[l] || "") || "";
for (var d = u.split("."), m = Number(d[0]) || 0, s = Number(d[1]) || 0, S = [], c = 0, v = 0; v < r.length; v++) {
var A = r.charCodeAt(v);
128 > A ? S[c++] = A: (2048 > A ? S[c++] = A >> 6 | 192 : (55296 === (64512 & A) && v + 1 < r.length && 56320 === (64512 & r.charCodeAt(v + 1)) ? (A = 65536 + ((1023 & A) << 10) + (1023 & r.charCodeAt(++v)), S[c++] = A >> 18 | 240, S[c++] = A >> 12 & 63 | 128) : S[c++] = A >> 12 | 224, S[c++] = A >> 6 & 63 | 128), S[c++] = 63 & A | 128)
}
for (var p = m,
F = "" + String.fromCharCode(43) + String.fromCharCode(45) + String.fromCharCode(97) + ("" + String.fromCharCode(94) + String.fromCharCode(43) + String.fromCharCode(54)), D = "" + String.fromCharCode(43) + String.fromCharCode(45) + String.fromCharCode(51) + ("" + String.fromCharCode(94) + String.fromCharCode(43) + String.fromCharCode(98)) + ("" + String.fromCharCode(43) + String.fromCharCode(45) + String.fromCharCode(102)), b = 0; b < S.length; b++) p += S[b],
p = n(p, F);
return p = n(p, D),
p ^= s,
0 > p && (p = (2147483647 & p) + 2147483648),
p %= 1e6,
p.toString() + "." + (p ^ m)
}
""")

    def getSign(self, text):
        return self.ctx.call("e", text)
With the sign and token in hand, the translation request flow is straightforward:
import json
import random

import requests

js = Py4Js_baidu()
token_cookies = [
    {'token': '4a634f2d5f893a75d6f18642e0e4b224', 'cookie': 'REALTIME_TRANS_SWITCH=1; FANYI_WORD_SWITCH=1; ... 8333265289e6d1c7e_1589608892_js'}
]
token_cookie = random.choice(token_cookies)
src = 'machine translation'  # the text to translate
post_url = 'https://fanyi.baidu.com/v2transapi'
post_data = {
    'from': 'en',
    'to': 'zh',
    'transtype': 'translang',
    'simple_means_flag': '3',
    'domain': 'common',
    'query': src,
    'sign': js.getSign(src),
    'token': token_cookie['token']
}
post_headers = {
    "accept": "*/*",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "zh-CN,zh;q=0.9,en;q=0.8",
    "cookie": token_cookie['cookie'],
    "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.162 Safari/537.36",
    "origin": "https://fanyi.baidu.com",
    "referer": "https://fanyi.baidu.com/"
}
# proxies is the proxy configuration, see https://www.abuyun.com/http-proxy/pro-manual-python.html
response = requests.post(post_url, data=post_data, proxies=proxies, headers=post_headers)
if response.status_code == 200:
    dict_data = json.loads(response.content.decode())
    tgt = dict_data['trans_result']['data'][0]['dst']
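At scale, individual requests will fail (rejected signs, dead proxies, timeouts), so it helps to wrap the POST in a retry loop. A sketch, with the HTTP callable injected so it can be swapped for `requests.post`; the back-off values are my own choice, not from the original flow.

```python
import json
import random
import time

def translate_with_retry(post, url, data, headers, proxies, retries=3):
    """Retry the translation POST; `post` has requests.post's signature."""
    for attempt in range(retries):
        try:
            resp = post(url, data=data, proxies=proxies, headers=headers, timeout=10)
            if resp.status_code == 200:
                body = json.loads(resp.content.decode())
                if 'trans_result' in body:  # absent when the sign/token is rejected
                    return body['trans_result']['data'][0]['dst']
        except Exception:
            pass  # proxy errors and timeouts: fall through and retry
        time.sleep(1 + random.random())  # jittered back-off between attempts
    return None
```

Call it as `translate_with_retry(requests.post, post_url, post_data, post_headers, proxies)`; a None result means the token/cookie pair should probably be rotated out.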
Notes
- The response from the Baidu Translate endpoint may also contain keywords and example sentences for the current query, and you can mine more example sentences by translating those keywords in turn. Don't be too greedy, though: Baidu obviously does not expose its whole corpus through example sentences.