
Merge branch 'master' of https://github.com/asciimoo/searx into code_results

Conflicts:
	searx/engines/searchcode_code.py
	searx/engines/searchcode_doc.py
	searx/static/oscar/js/searx.min.js
	searx/templates/oscar/result_templates/default.html
	searx/templates/oscar/result_templates/images.html
	searx/templates/oscar/result_templates/map.html
	searx/templates/oscar/result_templates/torrent.html
	searx/templates/oscar/result_templates/videos.html
Author: Thomas Pointhuber
Commit: 400b54191c

100 changed files with 611 additions and 121 deletions
  1. .travis.yml (+1 -1)
  2. AUTHORS.rst (+3 -0)
  3. CHANGELOG.rst (+27 -0)
  4. Makefile (+9 -9)
  5. README.rst (+16 -18)
  6. searx/__init__.py (+11 -1)
  7. searx/engines/500px.py (+2 -2)
  8. searx/engines/__init__.py (+7 -4)
  9. searx/engines/dailymotion.py (+11 -4)
  10. searx/engines/deezer.py (+61 -0)
  11. searx/engines/digg.py (+70 -0)
  12. searx/engines/duckduckgo_definitions.py (+1 -5)
  13. searx/engines/flickr-noapi.py (+95 -0)
  14. searx/engines/flickr.py (+62 -29)
  15. searx/engines/kickass.py (+3 -2)
  16. searx/engines/searchcode_doc.py (+8 -4)
  17. searx/engines/soundcloud.py (+12 -2)
  18. searx/engines/startpage.py (+4 -1)
  19. searx/engines/subtitleseeker.py (+78 -0)
  20. searx/engines/twitter.py (+19 -8)
  21. searx/engines/vimeo.py (+14 -12)
  22. searx/engines/wikidata.py (+14 -0)
  23. searx/engines/youtube.py (+11 -2)
  24. searx/https_rewrite.py (+5 -3)
  25. searx/search.py (+16 -8)
  26. searx/settings.yml (+49 -2)
  27. searx/static/oscar/js/searx.min.js (+0 -2)
  28. searx/static/themes/courgette/css/style.css (+0 -0)
  29. searx/static/themes/courgette/img/bg-body-index.jpg (+0 -0)
  30. searx/static/themes/courgette/img/favicon.png (+0 -0)
  31. searx/static/themes/courgette/img/github_ribbon.png (+0 -0)
  32. searx/static/themes/courgette/img/icons/icon_dailymotion.ico (+0 -0)
  33. searx/static/themes/courgette/img/icons/icon_deviantart.ico (+0 -0)
  34. searx/static/themes/courgette/img/icons/icon_github.ico (+0 -0)
  35. searx/static/themes/courgette/img/icons/icon_kickass.ico (+0 -0)
  36. searx/static/themes/courgette/img/icons/icon_soundcloud.ico (+0 -0)
  37. searx/static/themes/courgette/img/icons/icon_stackoverflow.ico (+0 -0)
  38. searx/static/themes/courgette/img/icons/icon_twitter.ico (+0 -0)
  39. searx/static/themes/courgette/img/icons/icon_vimeo.ico (+0 -0)
  40. searx/static/themes/courgette/img/icons/icon_wikipedia.ico (+0 -0)
  41. searx/static/themes/courgette/img/icons/icon_youtube.ico (+0 -0)
  42. searx/static/themes/courgette/img/preference-icon.png (+0 -0)
  43. searx/static/themes/courgette/img/search-icon.png (+0 -0)
  44. searx/static/themes/courgette/img/searx-mobile.png (+0 -0)
  45. searx/static/themes/courgette/img/searx.png (+0 -0)
  46. searx/static/themes/courgette/img/searx_logo.svg (+0 -0)
  47. searx/static/themes/courgette/js/mootools-autocompleter-1.1.2-min.js (+0 -0)
  48. searx/static/themes/courgette/js/mootools-core-1.4.5-min.js (+0 -0)
  49. searx/static/themes/courgette/js/searx.js (+0 -0)
  50. searx/static/themes/default/css/style.css (+0 -0)
  51. searx/static/themes/default/img/favicon.png (+0 -0)
  52. searx/static/themes/default/img/github_ribbon.png (+0 -0)
  53. searx/static/themes/default/img/icons/icon_dailymotion.ico (+0 -0)
  54. searx/static/themes/default/img/icons/icon_deviantart.ico (+0 -0)
  55. searx/static/themes/default/img/icons/icon_github.ico (+0 -0)
  56. searx/static/themes/default/img/icons/icon_kickass.ico (+0 -0)
  57. searx/static/themes/default/img/icons/icon_soundcloud.ico (+0 -0)
  58. searx/static/themes/default/img/icons/icon_stackoverflow.ico (+0 -0)
  59. searx/static/themes/default/img/icons/icon_twitter.ico (+0 -0)
  60. searx/static/themes/default/img/icons/icon_vimeo.ico (+0 -0)
  61. searx/static/themes/default/img/icons/icon_wikipedia.ico (+0 -0)
  62. searx/static/themes/default/img/icons/icon_youtube.ico (+0 -0)
  63. searx/static/themes/default/img/preference-icon.png (+0 -0)
  64. searx/static/themes/default/img/search-icon.png (+0 -0)
  65. searx/static/themes/default/img/searx.png (+0 -0)
  66. searx/static/themes/default/img/searx_logo.svg (+0 -0)
  67. searx/static/themes/default/js/mootools-autocompleter-1.1.2-min.js (+0 -0)
  68. searx/static/themes/default/js/mootools-core-1.4.5-min.js (+0 -0)
  69. searx/static/themes/default/js/searx.js (+0 -0)
  70. searx/static/themes/default/less/autocompleter.less (+0 -0)
  71. searx/static/themes/default/less/code.less (+0 -0)
  72. searx/static/themes/default/less/definitions.less (+0 -0)
  73. searx/static/themes/default/less/mixins.less (+0 -0)
  74. searx/static/themes/default/less/search.less (+0 -0)
  75. searx/static/themes/default/less/style.less (+0 -0)
  76. searx/static/themes/oscar/.gitignore (+0 -0)
  77. searx/static/themes/oscar/README.rst (+2 -2)
  78. searx/static/themes/oscar/css/bootstrap.min.css (+0 -0)
  79. searx/static/themes/oscar/css/leaflet.min.css (+0 -0)
  80. searx/static/themes/oscar/css/oscar.min.css (+0 -0)
  81. searx/static/themes/oscar/fonts/glyphicons-halflings-regular.eot (+0 -0)
  82. searx/static/themes/oscar/fonts/glyphicons-halflings-regular.svg (+0 -0)
  83. searx/static/themes/oscar/fonts/glyphicons-halflings-regular.ttf (+0 -0)
  84. searx/static/themes/oscar/fonts/glyphicons-halflings-regular.woff (+0 -0)
  85. searx/static/themes/oscar/gruntfile.js (+0 -0)
  86. searx/static/themes/oscar/img/favicon.png (+0 -0)
  87. searx/static/themes/oscar/img/icons/README.md (+0 -0)
  88. searx/static/themes/oscar/img/icons/amazon.png (+0 -0)
  89. searx/static/themes/oscar/img/icons/dailymotion.png (+0 -0)
  90. searx/static/themes/oscar/img/icons/deviantart.png (+0 -0)
  91. searx/static/themes/oscar/img/icons/facebook.png (+0 -0)
  92. searx/static/themes/oscar/img/icons/flickr.png (+0 -0)
  93. searx/static/themes/oscar/img/icons/github.png (+0 -0)
  94. searx/static/themes/oscar/img/icons/kickass.png (+0 -0)
  95. searx/static/themes/oscar/img/icons/openstreetmap.png (BIN)
  96. searx/static/themes/oscar/img/icons/photon.png (BIN)
  97. searx/static/themes/oscar/img/icons/searchcode code.png (BIN)
  98. searx/static/themes/oscar/img/icons/searchcode doc.png (BIN)
  99. searx/static/themes/oscar/img/icons/soundcloud.png (+0 -0)
  100. searx/static/themes/oscar/img/icons/stackoverflow.png (+0 -0)

.travis.yml (+1 -1)

   - "export DISPLAY=:99.0"
   - "sh -e /etc/init.d/xvfb start"
   - npm install -g less grunt-cli
-  - ( cd searx/static/oscar;npm install )
+  - ( cd searx/static/themes/oscar;npm install )
 install:
   - "make"
   - pip install coveralls

AUTHORS.rst (+3 -0)

 - @kernc
 - @Cqoicebordel
 - @Reventl0v
+- Caner Başaran
+- Benjamin Sonntag
+- @opi

CHANGELOG.rst (+27 -0)

+0.6.0 - 2014.12.25
+==================
+
+- Changelog added
+- New engines
+
+  - Flickr (api)
+  - Subtitleseeker
+  - photon
+  - 500px
+  - Searchcode
+  - Searchcode doc
+  - Kickass torrent
+- Precise search request timeout handling
+- Better favicon support
+- Stricter config parsing
+- Translation updates
+- Multiple ui fixes
+- Flickr (noapi) engine fix
+- Pep8 fixes
+
+
+News
+~~~~
+
+Health status of searx instances and engines: http://stats.searx.oe5tpo.com
+(source: https://github.com/pointhi/searx_stats)

Makefile (+9 -9)

 	virtualenv -p python$(version) --no-site-packages .
 	@touch $@
 
-tests: .installed.cfg
-	@bin/test
-	@grunt test --gruntfile searx/static/oscar/gruntfile.js
-
 robot: .installed.cfg
 	@bin/robot
 
...
 	@bin/flake8 setup.py
 	@bin/flake8 ./searx/
 
+tests: .installed.cfg flake8
+	@bin/test
+	@grunt test --gruntfile searx/static/themes/oscar/gruntfile.js
+
 coverage: .installed.cfg
 	@bin/coverage run bin/test
 	@bin/coverage report
...
 	bin/buildout -c minimal.cfg $(options)
 
 styles:
-	@lessc -x searx/static/default/less/style.less > searx/static/default/css/style.css
-	@lessc -x searx/static/oscar/less/bootstrap/bootstrap.less > searx/static/oscar/css/bootstrap.min.css
-	@lessc -x searx/static/oscar/less/oscar/oscar.less > searx/static/oscar/css/oscar.min.css
+	@lessc -x searx/static/themes/default/less/style.less > searx/static/themes/default/css/style.css
+	@lessc -x searx/static/themes/oscar/less/bootstrap/bootstrap.less > searx/static/themes/oscar/css/bootstrap.min.css
+	@lessc -x searx/static/themes/oscar/less/oscar/oscar.less > searx/static/themes/oscar/css/oscar.min.css
 
 grunt:
-	@grunt --gruntfile searx/static/oscar/gruntfile.js
+	@grunt --gruntfile searx/static/themes/oscar/gruntfile.js
 
 locales:
 	@pybabel compile -d searx/translations
 
 clean:
 	@rm -rf .installed.cfg .mr.developer.cfg bin parts develop-eggs \
-		searx.egg-info lib include .coverage coverage searx/static/default/css/*.css
+		searx.egg-info lib include .coverage coverage searx/static/themes/default/css/*.css
 
 .PHONY: all tests robot flake8 coverage production minimal styles locales clean

README.rst (+16 -18)

 Features
 ~~~~~~~~
 
--  Tracking free
--  Supports multiple output formats
-    -  json ``curl https://searx.me/?format=json&q=[query]``
-    -  csv ``curl https://searx.me/?format=csv&q=[query]``
-    -  opensearch/rss ``curl https://searx.me/?format=rss&q=[query]``
--  Opensearch support (you can set as default search engine)
--  Configurable search engines/categories
--  Different search languages
--  Duckduckgo like !bang functionality with engine shortcuts
--  Parallel queries - relatively fast
+- Tracking free
+- Supports multiple output formats
+
+  - json ``curl https://searx.me/?format=json&q=[query]``
+  - csv ``curl https://searx.me/?format=csv&q=[query]``
+  - opensearch/rss ``curl https://searx.me/?format=rss&q=[query]``
+- Opensearch support (you can set as default search engine)
+- Configurable search engines/categories
+- Different search languages
+- Duckduckgo like !bang functionality with engine shortcuts
+- Parallel queries - relatively fast
 
 Installation
 ~~~~~~~~~~~~
...
 TODO
 ~~~~
 
--  Moar engines
--  Better ui
--  Browser integration
--  Documentation
--  Fix ``flake8`` errors, ``make flake8`` will be merged into
-   ``make tests`` when it does not fail anymore
--  Tests
--  When we have more tests, we can integrate Travis-CI
+- Moar engines
+- Better ui
+- Browser integration
+- Documentation
+- Tests
 
 Bugs
 ~~~~

searx/__init__.py (+11 -1)

 (C) 2013- by Adam Tauber, <asciimoo@gmail.com>
 '''
 
+import logging
 from os import environ
 from os.path import realpath, dirname, join, abspath
-from searx.https_rewrite import load_https_rules
 try:
     from yaml import load
 except:
...
 with open(settings_path) as settings_yaml:
     settings = load(settings_yaml)
 
+if settings.get('server', {}).get('debug'):
+    logging.basicConfig(level=logging.DEBUG)
+else:
+    logging.basicConfig(level=logging.WARNING)
+
+logger = logging.getLogger('searx')
+
 # load https rules only if https rewrite is enabled
 if settings.get('server', {}).get('https_rewrite'):
     # loade https rules
+    from searx.https_rewrite import load_https_rules
     load_https_rules(https_rewrite_path)
+
+logger.info('Initialisation done')
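
Note: the block added above gates the global log level on settings['server']['debug'] and creates the root 'searx' logger that the other files in this commit extend. A minimal standalone sketch of how those child loggers relate to it (names are taken from this diff; the snippet itself is only illustrative):

    import logging

    logging.basicConfig(level=logging.DEBUG)     # WARNING when debug is off
    logger = logging.getLogger('searx')          # root logger from searx/__init__.py

    # searx/engines/__init__.py, searx/search.py and searx/https_rewrite.py
    # derive their loggers the same way:
    engines_logger = logger.getChild('engines')  # records appear as 'searx.engines'
    engines_logger.error('Missing engine config attribute: "%s.%s"', 'foo', 'bar')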

searx/engines/500px.py (+2 -2)

 # get response from search-request
 def response(resp):
     results = []
-    
+
     dom = html.fromstring(resp.text)
-    
+
     # parse results
     for result in dom.xpath('//div[@class="photo"]'):
         link = result.xpath('.//a')[0]

searx/engines/__init__.py (+7 -4)

 from flask.ext.babel import gettext
 from operator import itemgetter
 from searx import settings
+from searx import logger
+
+
+logger = logger.getChild('engines')
 
 engine_dir = dirname(realpath(__file__))
 
...
         if engine_attr.startswith('_'):
             continue
         if getattr(engine, engine_attr) is None:
-            print('[E] Engine config error: Missing attribute "{0}.{1}"'\
+            logger.error('Missing engine config attribute: "{0}.{1}"'
                   .format(engine.name, engine_attr))
             sys.exit(1)
 
...
         categories['general'].append(engine)
 
     if engine.shortcut:
-        # TODO check duplications
         if engine.shortcut in engine_shortcuts:
-            print('[E] Engine config error: ambigious shortcut: {0}'\
+            logger.error('Engine config error: ambigious shortcut: {0}'
                   .format(engine.shortcut))
             sys.exit(1)
         engine_shortcuts[engine.shortcut] = engine.name
...
 if 'engines' not in settings or not settings['engines']:
-    print '[E] Error no engines found. Edit your settings.yml'
+    logger.error('No engines found. Edit your settings.yml')
     exit(2)
 
 for engine_data in settings['engines']:

searx/engines/dailymotion.py (+11 -4)

 # @using-api   yes
 # @results     JSON
 # @stable      yes
-# @parse       url, title, thumbnail
+# @parse       url, title, thumbnail, publishedDate, embedded
 #
 # @todo        set content-parameter with correct data
 
 from urllib import urlencode
 from json import loads
+from cgi import escape
+from datetime import datetime
 
 # engine dependent config
 categories = ['videos']
...
 
 # search-url
 # see http://www.dailymotion.com/doc/api/obj-video.html
-search_url = 'https://api.dailymotion.com/videos?fields=title,description,duration,url,thumbnail_360_url&sort=relevance&limit=5&page={pageno}&{query}'  # noqa
+search_url = 'https://api.dailymotion.com/videos?fields=created_time,title,description,duration,url,thumbnail_360_url,id&sort=relevance&limit=5&page={pageno}&{query}'  # noqa
+embedded_url = '<iframe frameborder="0" width="540" height="304" ' +\
+    'data-src="//www.dailymotion.com/embed/video/{videoid}" allowfullscreen></iframe>'
 
 
 # do search-request
...
     for res in search_res['list']:
         title = res['title']
         url = res['url']
-        #content = res['description']
-        content = ''
+        content = escape(res['description'])
         thumbnail = res['thumbnail_360_url']
+        publishedDate = datetime.fromtimestamp(res['created_time'], None)
+        embedded = embedded_url.format(videoid=res['id'])
 
         results.append({'template': 'videos.html',
                         'url': url,
                         'title': title,
                         'content': content,
+                        'publishedDate': publishedDate,
+                        'embedded': embedded,
                         'thumbnail': thumbnail})
 
     # return results
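
Note: the new 'embedded' field carries an iframe whose address sits in data-src rather than src, so the theme's JavaScript can defer loading the player until the user expands the result. What the template receives, with a made-up video id:

    embedded_url = '<iframe frameborder="0" width="540" height="304" ' +\
        'data-src="//www.dailymotion.com/embed/video/{videoid}" allowfullscreen></iframe>'

    # e.g. embedded_url.format(videoid='x2example') yields an iframe pointing at
    # //www.dailymotion.com/embed/video/x2example
    print(embedded_url.format(videoid='x2example'))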

searx/engines/deezer.py (+61 -0)

+## Deezer (Music)
+#
+# @website     https://deezer.com
+# @provide-api yes (http://developers.deezer.com/api/)
+#
+# @using-api   yes
+# @results     JSON
+# @stable      yes
+# @parse       url, title, content, embedded
+
+from json import loads
+from urllib import urlencode
+
+# engine dependent config
+categories = ['music']
+paging = True
+
+# search-url
+url = 'http://api.deezer.com/'
+search_url = url + 'search?{query}&index={offset}'
+
+embedded_url = '<iframe scrolling="no" frameborder="0" allowTransparency="true" ' +\
+    'data-src="http://www.deezer.com/plugins/player?type=tracks&id={audioid}" ' +\
+    'width="540" height="80"></iframe>'
+
+
+# do search-request
+def request(query, params):
+    offset = (params['pageno'] - 1) * 25
+
+    params['url'] = search_url.format(query=urlencode({'q': query}),
+                                      offset=offset)
+
+    return params
+
+
+# get response from search-request
+def response(resp):
+    results = []
+
+    search_res = loads(resp.text)
+
+    # parse results
+    for result in search_res.get('data', []):
+        if result['type'] == 'track':
+            title = result['title']
+            url = result['link']
+            content = result['artist']['name'] +\
+                " &bull; " +\
+                result['album']['title'] +\
+                " &bull; " + result['title']
+            embedded = embedded_url.format(audioid=result['id'])
+
+            # append result
+            results.append({'url': url,
+                            'title': title,
+                            'embedded': embedded,
+                            'content': content})
+
+    # return results
+    return results
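
Note: deezer.py shows the two-function contract every engine in this commit follows: request() fills in params['url'] and response() maps the raw HTTP body to result dicts. A rough sketch of how the core exercises such a module (the FakeResponse object and its JSON payload are invented for illustration; the real call path lives in searx/search.py):

    import searx.engines.deezer as deezer

    # 1) build the outgoing request for page 1
    params = deezer.request('daft punk', {'pageno': 1, 'url': '', 'headers': {}})
    print(params['url'])  # http://api.deezer.com/search?q=daft+punk&index=0

    # 2) parse a (fabricated) API answer; only .text is consulted
    class FakeResponse(object):
        text = ('{"data": [{"type": "track", "title": "Around the World",'
                ' "link": "http://www.deezer.com/track/3129407", "id": 3129407,'
                ' "artist": {"name": "Daft Punk"}, "album": {"title": "Homework"}}]}')

    for r in deezer.response(FakeResponse()):
        print(r['url'] + ' - ' + r['title'])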

searx/engines/digg.py (+70 -0)

+## Digg (News, Social media)
+#
+# @website     https://digg.com/
+# @provide-api no
+#
+# @using-api   no
+# @results     HTML (using search portal)
+# @stable      no (HTML can change)
+# @parse       url, title, content, publishedDate, thumbnail
+
+from urllib import quote_plus
+from json import loads
+from lxml import html
+from cgi import escape
+from dateutil import parser
+
+# engine dependent config
+categories = ['news', 'social media']
+paging = True
+
+# search-url
+base_url = 'https://digg.com/'
+search_url = base_url+'api/search/{query}.json?position={position}&format=html'
+
+# specific xpath variables
+results_xpath = '//article'
+link_xpath = './/small[@class="time"]//a'
+title_xpath = './/h2//a//text()'
+content_xpath = './/p//text()'
+pubdate_xpath = './/time'
+
+
+# do search-request
+def request(query, params):
+    offset = (params['pageno'] - 1) * 10
+    params['url'] = search_url.format(position=offset,
+                                      query=quote_plus(query))
+    return params
+
+
+# get response from search-request
+def response(resp):
+    results = []
+
+    search_result = loads(resp.text)
+
+    if search_result['html'] == '':
+        return results
+
+    dom = html.fromstring(search_result['html'])
+
+    # parse results
+    for result in dom.xpath(results_xpath):
+        url = result.attrib.get('data-contenturl')
+        thumbnail = result.xpath('.//img')[0].attrib.get('src')
+        title = ''.join(result.xpath(title_xpath))
+        content = escape(''.join(result.xpath(content_xpath)))
+        pubdate = result.xpath(pubdate_xpath)[0].attrib.get('datetime')
+        publishedDate = parser.parse(pubdate)
+
+        # append result
+        results.append({'url': url,
+                        'title': title,
+                        'content': content,
+                        'template': 'videos.html',
+                        'publishedDate': publishedDate,
+                        'thumbnail': thumbnail})
+
+    # return results
+    return results

searx/engines/duckduckgo_definitions.py (+1 -5)

 import json
 from urllib import urlencode
 from lxml import html
+from searx.utils import html_to_text
 from searx.engines.xpath import extract_text
 
 url = 'https://api.duckduckgo.com/'\
...
         return text
 
 
-def html_to_text(htmlFragment):
-    dom = html.fromstring(htmlFragment)
-    return extract_text(dom)
-
-
 def request(query, params):
     # TODO add kl={locale}
     params['url'] = url.format(query=urlencode({'q': query}))

searx/engines/flickr-noapi.py (+95 -0)

+#!/usr/bin/env python
+
+#  Flickr (Images)
+#
+# @website     https://www.flickr.com
+# @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)
+#
+# @using-api   no
+# @results     HTML
+# @stable      no
+# @parse       url, title, thumbnail, img_src
+
+from urllib import urlencode
+from json import loads
+import re
+
+categories = ['images']
+
+url = 'https://secure.flickr.com/'
+search_url = url+'search/?{query}&page={page}'
+photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
+regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)
+image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
+
+paging = True
+
+
+def build_flickr_url(user_id, photo_id):
+    return photo_url.format(userid=user_id, photoid=photo_id)
+
+
+def request(query, params):
+    params['url'] = search_url.format(query=urlencode({'text': query}),
+                                      page=params['pageno'])
+    return params
+
+
+def response(resp):
+    results = []
+
+    matches = regex.search(resp.text)
+
+    if matches is None:
+        return results
+
+    match = matches.group(1)
+    search_results = loads(match)
+
+    if '_data' not in search_results:
+        return []
+
+    photos = search_results['_data']
+
+    for photo in photos:
+
+        # In paged configuration, the first pages' photos
+        # are represented by a None object
+        if photo is None:
+            continue
+
+        img_src = None
+        # From the biggest to the lowest format
+        for image_size in image_sizes:
+            if image_size in photo['sizes']:
+                img_src = photo['sizes'][image_size]['displayUrl']
+                break
+
+        if not img_src:
+            continue
+
+        if 'id' not in photo['owner']:
+            continue
+
+        url = build_flickr_url(photo['owner']['id'], photo['id'])
+
+        title = photo['title']
+
+        content = '<span class="photo-author">' +\
+                  photo['owner']['username'] +\
+                  '</span><br />'
+
+        if 'description' in photo:
+            content = content +\
+                '<span class="description">' +\
+                photo['description'] +\
+                '</span>'
+
+        # append result
+        results.append({'url': url,
+                        'title': title,
+                        'img_src': img_src,
+                        'content': content,
+                        'template': 'images.html'})
+
+    return results
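
Note: instead of the API, flickr-noapi pulls the JSON photo model that flickr.com inlines into its search page and decodes it with the regex defined above. The extraction step in isolation (sample_page is a fabricated miniature of the real markup):

    import re
    from json import loads

    regex = re.compile(r"\"search-photos-models\",\"photos\":(.*}),\"totalItems\":", re.DOTALL)

    sample_page = ('... "search-photos-models","photos":{"_data": [null, '
                   '{"id": "1", "title": "t", "sizes": {}, "owner": {"id": "o"}}]},'
                   '"totalItems": 2 ...')

    matches = regex.search(sample_page)
    if matches is not None:
        photos = loads(matches.group(1))['_data']
        # on later pages the leading entries are None placeholders, hence the
        # `if photo is None: continue` guard in the engine
        print(len(photos))  # 2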

searx/engines/flickr.py (+62 -29)

 #!/usr/bin/env python
 
+## Flickr (Images)
+#
+# @website     https://www.flickr.com
+# @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)
+#
+# @using-api   yes
+# @results     JSON
+# @stable      yes
+# @parse       url, title, thumbnail, img_src
+#More info on api-key : https://www.flickr.com/services/apps/create/
+
 from urllib import urlencode
-#from json import loads
-from urlparse import urljoin
-from lxml import html
-from time import time
+from json import loads
 
 categories = ['images']
 
-url = 'https://secure.flickr.com/'
-search_url = url+'search/?{query}&page={page}'
-results_xpath = '//div[@class="view display-item-tile"]/figure/div'
+nb_per_page = 15
+paging = True
+api_key = None
+
+
+url = 'https://api.flickr.com/services/rest/?method=flickr.photos.search' +\
+      '&api_key={api_key}&{text}&sort=relevance' +\
+      '&extras=description%2C+owner_name%2C+url_o%2C+url_z' +\
+      '&per_page={nb_per_page}&format=json&nojsoncallback=1&page={page}'
+photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
 
 paging = True
 
 
+def build_flickr_url(user_id, photo_id):
+    return photo_url.format(userid=user_id, photoid=photo_id)
+
+
 def request(query, params):
-    params['url'] = search_url.format(query=urlencode({'text': query}),
-                                      page=params['pageno'])
-    time_string = str(int(time())-3)
-    params['cookies']['BX'] = '3oqjr6d9nmpgl&b=3&s=dh'
-    params['cookies']['xb'] = '421409'
-    params['cookies']['localization'] = 'en-us'
-    params['cookies']['flrbp'] = time_string +\
-        '-3a8cdb85a427a33efda421fbda347b2eaf765a54'
-    params['cookies']['flrbs'] = time_string +\
-        '-ed142ae8765ee62c9ec92a9513665e0ee1ba6776'
-    params['cookies']['flrb'] = '9'
+    params['url'] = url.format(text=urlencode({'text': query}),
+                               api_key=api_key,
+                               nb_per_page=nb_per_page,
+                               page=params['pageno'])
     return params
 
 
 def response(resp):
     results = []
-    dom = html.fromstring(resp.text)
-    for result in dom.xpath(results_xpath):
-        img = result.xpath('.//img')
 
-        if not img:
-            continue
+    search_results = loads(resp.text)
 
-        img = img[0]
-        img_src = 'https:'+img.attrib.get('src')
+    # return empty array if there are no results
+    if not 'photos' in search_results:
+        return []
 
-        if not img_src:
+    if not 'photo' in search_results['photos']:
+        return []
+
+    photos = search_results['photos']['photo']
+
+    # parse results
+    for photo in photos:
+        if 'url_o' in photo:
+            img_src = photo['url_o']
+        elif 'url_z' in photo:
+            img_src = photo['url_z']
+        else:
             continue
 
-        href = urljoin(url, result.xpath('.//a')[0].attrib.get('href'))
-        title = img.attrib.get('alt', '')
-        results.append({'url': href,
+        url = build_flickr_url(photo['owner'], photo['id'])
+
+        title = photo['title']
+
+        content = '<span class="photo-author">' +\
+                  photo['ownername'] +\
+                  '</span><br />' +\
+                  '<span class="description">' +\
+                  photo['description']['_content'] +\
+                  '</span>'
+
+        # append result
+        results.append({'url': url,
                         'title': title,
                         'img_src': img_src,
+                        'content': content,
                         'template': 'images.html'})
+
+    # return results
     return results

searx/engines/kickass.py (+3 -2)

 
 # specific xpath variables
 magnet_xpath = './/a[@title="Torrent magnet link"]'
-#content_xpath = './/font[@class="detDesc"]//text()'
+content_xpath = './/span[@class="font11px lightgrey block"]'
 
 
 # do search-request
...
         link = result.xpath('.//a[@class="cellMainLink"]')[0]
         href = urljoin(url, link.attrib['href'])
         title = ' '.join(link.xpath('.//text()'))
-        content = escape(html.tostring(result.xpath('.//span[@class="font11px lightgrey block"]')[0], method="text"))
+        content = escape(html.tostring(result.xpath(content_xpath)[0],
+                                       method="text"))
         seed = result.xpath('.//td[contains(@class, "green")]/text()')[0]
         leech = result.xpath('.//td[contains(@class, "red")]/text()')[0]
 

searx/engines/searchcode_doc.py (+8 -4)

     for result in search_results['results']:
         href = result['url']
         title = "[" + result['type'] + "] " +\
-                result['namespace'] + " " + result['name']
-        content = '<span class="highlight">[' + result['type'] + "] " +\
-                  result['name'] + " " + result['synopsis'] +\
-                  "</span><br />" + result['description']
+                result['namespace'] +\
+                " " + result['name']
+        content = '<span class="highlight">[' +\
+                  result['type'] + "] " +\
+                  result['name'] + " " +\
+                  result['synopsis'] +\
+                  "</span><br />" +\
+                  result['description']
 
         # append result
         results.append({'url': href,

searx/engines/soundcloud.py (+12 -2)

 # @using-api   yes
 # @results     JSON
 # @stable      yes
-# @parse       url, title, content
+# @parse       url, title, content, publishedDate, embedded
 
 from json import loads
-from urllib import urlencode
+from urllib import urlencode, quote_plus
+from dateutil import parser
 
 # engine dependent config
 categories = ['music']
...
                          '&linked_partitioning=1'\
                          '&client_id={client_id}'   # noqa
 
+embedded_url = '<iframe width="100%" height="166" ' +\
+    'scrolling="no" frameborder="no" ' +\
+    'data-src="https://w.soundcloud.com/player/?url={uri}"></iframe>'
+
 
 # do search-request
 def request(query, params):
...
         if result['kind'] in ('track', 'playlist'):
             title = result['title']
             content = result['description']
+            publishedDate = parser.parse(result['last_modified'])
+            uri = quote_plus(result['uri'])
+            embedded = embedded_url.format(uri=uri)
 
             # append result
             results.append({'url': result['permalink_url'],
                             'title': title,
+                            'publishedDate': publishedDate,
+                            'embedded': embedded,
                             'content': content})
 
     # return results

searx/engines/startpage.py (+4 -1)

             continue
         link = links[0]
         url = link.attrib.get('href')
-        title = escape(link.text_content())
+        try:
+            title = escape(link.text_content())
+        except UnicodeDecodeError:
+            continue
 
         # block google-ad url's
         if re.match("^http(s|)://www.google.[a-z]+/aclk.*$", url):

searx/engines/subtitleseeker.py (+78 -0)

+## Subtitleseeker (Video)
+#
+# @website     http://www.subtitleseeker.com
+# @provide-api no
+#
+# @using-api   no
+# @results     HTML
+# @stable      no (HTML can change)
+# @parse       url, title, content
+
+from cgi import escape
+from urllib import quote_plus
+from lxml import html
+from searx.languages import language_codes
+
+# engine dependent config
+categories = ['videos']
+paging = True
+language = ""
+
+# search-url
+url = 'http://www.subtitleseeker.com/'
+search_url = url+'search/TITLES/{query}&p={pageno}'
+
+# specific xpath variables
+results_xpath = '//div[@class="boxRows"]'
+
+
+# do search-request
+def request(query, params):
+    params['url'] = search_url.format(query=quote_plus(query),
+                                      pageno=params['pageno'])
+    return params
+
+
+# get response from search-request
+def response(resp):
+    results = []
+
+    dom = html.fromstring(resp.text)
+
+    search_lang = ""
+
+    if resp.search_params['language'] != 'all':
+        search_lang = [lc[1]
+                       for lc in language_codes
+                       if lc[0][:2] == resp.search_params['language']][0]
+
+    # parse results
+    for result in dom.xpath(results_xpath):
+        link = result.xpath(".//a")[0]
+        href = link.attrib.get('href')
+
+        if language is not "":
+            href = href + language + '/'
+        elif search_lang:
+            href = href + search_lang + '/'
+
+        title = escape(link.xpath(".//text()")[0])
+
+        content = result.xpath('.//div[contains(@class,"red")]//text()')[0]
+        content = content + " - "
+        text = result.xpath('.//div[contains(@class,"grey-web")]')[0]
+        content = content + html.tostring(text, method='text')
+
+        if result.xpath(".//span") != []:
+            content = content +\
+                " - (" +\
+                result.xpath(".//span//text()")[0].strip() +\
+                ")"
+
+        # append result
+        results.append({'url': href,
+                        'title': title,
+                        'content': escape(content)})
+
+    # return results
+    return results

searx/engines/twitter.py (+19 -8)

 ## Twitter (Social media)
 #
-# @website     https://www.bing.com/news
+# @website     https://twitter.com/
 # @provide-api yes (https://dev.twitter.com/docs/using-search)
 #
 # @using-api   no
...
 from urllib import urlencode
 from lxml import html
 from cgi import escape
+from datetime import datetime
 
 # engine dependent config
 categories = ['social media']
...
 results_xpath = '//li[@data-item-type="tweet"]'
 link_xpath = './/small[@class="time"]//a'
 title_xpath = './/span[@class="username js-action-profile-name"]//text()'
-content_xpath = './/p[@class="js-tweet-text tweet-text"]//text()'
+content_xpath = './/p[@class="js-tweet-text tweet-text"]'
+timestamp_xpath = './/span[contains(@class,"_timestamp")]'
 
 
 # do search-request
...
         link = tweet.xpath(link_xpath)[0]
         url = urljoin(base_url, link.attrib.get('href'))
         title = ''.join(tweet.xpath(title_xpath))
-        content = escape(''.join(tweet.xpath(content_xpath)))
-
-        # append result
-        results.append({'url': url,
-                        'title': title,
-                        'content': content})
+        content = escape(html.tostring(tweet.xpath(content_xpath)[0], method='text', encoding='UTF-8').decode("utf-8"))
+        pubdate = tweet.xpath(timestamp_xpath)
+        if len(pubdate) > 0:
+            timestamp = float(pubdate[0].attrib.get('data-time'))
+            publishedDate = datetime.fromtimestamp(timestamp, None)
+            # append result
+            results.append({'url': url,
+                            'title': title,
+                            'content': content,
+                            'publishedDate': publishedDate})
+        else:
+            # append result
+            results.append({'url': url,
+                            'title': title,
+                            'content': content})
 
     # return results
     return results

searx/engines/vimeo.py (+14 -12)

-## Vimeo (Videos)
+#  Vimeo (Videos)
 #
 # @website     https://vimeo.com/
 # @provide-api yes (http://developer.vimeo.com/api),
...
 # @using-api   no (TODO, rewrite to api)
 # @results     HTML (using search portal)
 # @stable      no (HTML can change)
-# @parse       url, title, publishedDate,  thumbnail
+# @parse       url, title, publishedDate,  thumbnail, embedded
 #
 # @todo        rewrite to api
 # @todo        set content-parameter with correct data
 
 from urllib import urlencode
-from HTMLParser import HTMLParser
 from lxml import html
+from HTMLParser import HTMLParser
 from searx.engines.xpath import extract_text
 from dateutil import parser
 
...
 paging = True
 
 # search-url
-base_url = 'https://vimeo.com'
+base_url = 'http://vimeo.com'
 search_url = base_url + '/search/page:{pageno}?{query}'
 
 # specific xpath variables
+results_xpath = '//div[@id="browse_content"]/ol/li'
 url_xpath = './a/@href'
+title_xpath = './a/div[@class="data"]/p[@class="title"]'
 content_xpath = './a/img/@src'
-title_xpath = './a/div[@class="data"]/p[@class="title"]/text()'
-results_xpath = '//div[@id="browse_content"]/ol/li'
 publishedDate_xpath = './/p[@class="meta"]//attribute::datetime'
 
+embedded_url = '<iframe data-src="//player.vimeo.com/video{videoid}" ' +\
+    'width="540" height="304" frameborder="0" ' +\
+    'webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>'
+
 
 # do search-request
 def request(query, params):
     params['url'] = search_url.format(pageno=params['pageno'],
                                       query=urlencode({'q': query}))
 
-    # TODO required?
-    params['cookies']['__utma'] =\
-        '00000000.000#0000000.0000000000.0000000000.0000000000.0'
-
     return params
 
 
...
     results = []
 
     dom = html.fromstring(resp.text)
-
     p = HTMLParser()
 
     # parse results
     for result in dom.xpath(results_xpath):
-        url = base_url + result.xpath(url_xpath)[0]
+        videoid = result.xpath(url_xpath)[0]
+        url = base_url + videoid
         title = p.unescape(extract_text(result.xpath(title_xpath)))
         thumbnail = extract_text(result.xpath(content_xpath)[0])
         publishedDate = parser.parse(extract_text(
             result.xpath(publishedDate_xpath)[0]))
+        embedded = embedded_url.format(videoid=videoid)
 
         # append result
         results.append({'url': url,
                         'title': title,
                         'content': '',
                         'template': 'videos.html',
                         'publishedDate': publishedDate,
+                        'embedded': embedded,
                         'thumbnail': thumbnail})
 
     # return results

searx/engines/wikidata.py (+14 -0)

 import json
 from requests import get
 from urllib import urlencode
+import locale
+import dateutil.parser
 
 result_count = 1
 wikidata_host = 'https://www.wikidata.org'
...
     language = resp.search_params['language'].split('_')[0]
     if language == 'all':
         language = 'en'
+
+    try:
+        locale.setlocale(locale.LC_ALL, str(resp.search_params['language']))
+    except:
+        try:
+            locale.setlocale(locale.LC_ALL, 'en_US')
+        except:
+            pass
+        pass
+
     url = url_detail.format(query=urlencode({'ids': '|'.join(wikidata_ids),
                                             'languages': language + '|en'}))
 
...
 
     date_of_birth = get_time(claims, 'P569', None)
     if date_of_birth is not None:
+        date_of_birth = dateutil.parser.parse(date_of_birth[8:]).strftime(locale.nl_langinfo(locale.D_FMT))
         attributes.append({'label': 'Date of birth', 'value': date_of_birth})
 
     date_of_death = get_time(claims, 'P570', None)
     if date_of_death is not None:
+        date_of_death = dateutil.parser.parse(date_of_death[8:]).strftime(locale.nl_langinfo(locale.D_FMT))
         attributes.append({'label': 'Date of death', 'value': date_of_death})
 
     if len(attributes) == 0 and len(urls) == 2 and len(description) == 0:
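
Note: the two strftime lines format Wikidata's date-of-birth/death claims in the user's locale; value[8:] cuts the signed zero-padding off the front of Wikidata's time strings before dateutil parses them. Reduced to its essentials (the sample value is illustrative of Wikidata's '+0000000...' style; nl_langinfo is Unix-only, and setlocale fails for locales not installed on the host, which is what the nested try in the diff guards against):

    import locale
    import dateutil.parser

    try:
        locale.setlocale(locale.LC_ALL, 'fr_FR')
    except locale.Error:
        locale.setlocale(locale.LC_ALL, 'en_US')

    value = '+00000002014-12-25T00:00:00Z'      # Wikidata-style time claim
    parsed = dateutil.parser.parse(value[8:])   # parses '2014-12-25T00:00:00Z'
    print(parsed.strftime(locale.nl_langinfo(locale.D_FMT)))  # e.g. 25/12/2014 under fr_FR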

searx/engines/youtube.py (+11 -2)

 # @using-api   yes
 # @results     JSON
 # @stable      yes
-# @parse       url, title, content, publishedDate, thumbnail
+# @parse       url, title, content, publishedDate, thumbnail, embedded
 
 from json import loads
 from urllib import urlencode
...
 # search-url
 base_url = 'https://gdata.youtube.com/feeds/api/videos'
-search_url = base_url + '?alt=json&{query}&start-index={index}&max-results=5'  # noqa
+search_url = base_url + '?alt=json&{query}&start-index={index}&max-results=5'
+
+embedded_url = '<iframe width="540" height="304" ' +\
+    'data-src="//www.youtube-nocookie.com/embed/{videoid}" ' +\
+    'frameborder="0" allowfullscreen></iframe>'
 
 
 # do search-request
...
         if url.endswith('&'):
             url = url[:-1]
 
+        videoid = url[32:]
+
         title = result['title']['$t']
         content = ''
         thumbnail = ''
...
         content = result['content']['$t']
 
+        embedded = embedded_url.format(videoid=videoid)
+
         # append result
         results.append({'url': url,
                         'title': title,
                         'content': content,
                         'template': 'videos.html',
                         'publishedDate': publishedDate,
+                        'embedded': embedded,
                         'thumbnail': thumbnail})
 
     # return results

searx/https_rewrite.py (+5 -3)

 from lxml import etree
 from os import listdir
 from os.path import isfile, isdir, join
+from searx import logger
 
 
+logger = logger.getChild("https_rewrite")
+
 # https://gitweb.torproject.org/\
 # pde/https-everywhere.git/tree/4.0:/src/chrome/content/rules
 
...
 def load_https_rules(rules_path):
     # check if directory exists
     if not isdir(rules_path):
-        print("[E] directory not found: '" + rules_path + "'")
+        logger.error("directory not found: '" + rules_path + "'")
         return
 
     # search all xml files which are stored in the https rule directory
...
         # append ruleset
         https_rules.append(ruleset)
 
-    print(' * {n} https-rules loaded'.format(n=len(https_rules)))
-
+    logger.info('{n} rules loaded'.format(n=len(https_rules)))
 
 
 def https_url_rewrite(result):

searx/search.py (+16 -8)

 from searx.languages import language_codes
 from searx.utils import gen_useragent
 from searx.query import Query
+from searx import logger
 
 
+logger = logger.getChild('search')
+
 number_of_searches = 0
 
 
 def search_request_wrapper(fn, url, engine_name, **kwargs):
     try:
         return fn(url, **kwargs)
-    except Exception, e:
+    except:
         # increase errors stats
         engines[engine_name].stats['errors'] += 1
 
         # print engine name and specific error message
-        print('[E] Error with engine "{0}":\n\t{1}'.format(
-            engine_name, str(e)))
+        logger.exception('engine crash: {0}'.format(engine_name))
         return
 
...
             remaining_time = max(0.0, timeout_limit - (time() - search_start))
             th.join(remaining_time)
             if th.isAlive():
-                print('engine timeout: {0}'.format(th._engine_name))
-
+                logger.warning('engine timeout: {0}'.format(th._engine_name))
 
 
 # get default reqest parameter
 def default_request_params():
     return {
-        'method': 'GET', 'headers': {}, 'data': {}, 'url': '', 'cookies': {}, 'verify': True}
+        'method': 'GET',
+        'headers': {},
+        'data': {},
+        'url': '',
+        'cookies': {},
+        'verify': True
+    }
 
 
 # create a callback wrapper for the search engine results
...
                 continue
 
             # append request to list
-            requests.append((req, request_params['url'], request_args, selected_engine['name']))
+            requests.append((req, request_params['url'],
+                             request_args,
+                             selected_engine['name']))
 
         if not requests:
             return results, suggestions, answers, infoboxes
         # send all search-request
         threaded_requests(requests)
 
-
         while not results_queue.empty():
             engine_name, engine_results = results_queue.get_nowait()
 
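Note: the join visible above is what makes the per-request timeout global: each engine thread is joined with only the time still left in the shared budget, so one slow engine cannot push the whole search past timeout_limit. The pattern extracted into a generic sketch (not searx code itself):

    import threading
    import time

    def join_all_within(threads, timeout_limit):
        search_start = time.time()
        for th in threads:
            remaining_time = max(0.0, timeout_limit - (time.time() - search_start))
            th.join(remaining_time)      # never blocks past the global budget
            if th.isAlive():             # Python 2 spelling, as in the diff
                print('engine timeout: {0}'.format(th.name))

    threads = [threading.Thread(target=time.sleep, args=(d,)) for d in (0.1, 2.0)]
    for th in threads:
        th.start()
    join_all_within(threads, 0.5)        # reports the 2.0s thread as timed out
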

+ 49
- 2
searx/settings.yml View File

     engine : currency_convert
     categories : general
     shortcut : cc
+
+  - name : deezer
+    engine : deezer
+    shortcut : dz

   - name : deviantart
     engine : deviantart

   - name : ddg definitions
     engine : duckduckgo_definitions
     shortcut : ddd
+
+  - name : digg
+    engine : digg
+    shortcut : dg

   - name : wikidata
     engine : wikidata
     shortcut : px

   - name : flickr
-    engine : flickr
     categories : images
     shortcut : fl
-    timeout: 3.0
+# You can use the engine using the official stable API, but you need an API key
+# See : https://www.flickr.com/services/apps/create/
+#    engine : flickr
+#    api_key: 'apikey' # required!
+# Or you can use the html non-stable engine, activated by default
+    engine : flickr-noapi

   - name : general-file
     engine : generalfile
     engine : google_news
     shortcut : gon

+  - name : google play apps
+    engine        : xpath
+    search_url    : https://play.google.com/store/search?q={query}&c=apps
+    url_xpath     : //a[@class="title"]/@href
+    title_xpath   : //a[@class="title"]
+    content_xpath : //a[@class="subtitle"]
+    categories : files
+    shortcut : gpa
+
+  - name : google play movies
+    engine        : xpath
+    search_url    : https://play.google.com/store/search?q={query}&c=movies
+    url_xpath     : //a[@class="title"]/@href
+    title_xpath   : //a[@class="title"]
+    content_xpath : //a[@class="subtitle"]
+    categories : videos
+    shortcut : gpm
+
+  - name : google play music
+    engine        : xpath
+    search_url    : https://play.google.com/store/search?q={query}&c=music
+    url_xpath     : //a[@class="title"]/@href
+    title_xpath   : //a[@class="title"]
+    content_xpath : //a[@class="subtitle"]
+    categories : music
+    shortcut : gps
+
   - name : openstreetmap
     engine : openstreetmap
     shortcut : osm
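The three google play entries need no dedicated engine module: the generic xpath engine substitutes the query into search_url and applies the three expressions to the returned page. A rough sketch of that extraction step (using lxml, which searx already depends on; the HTML snippet is invented, and the real engine does extra URL normalisation)::

    from lxml import html

    # invented markup shaped like what the XPaths above expect
    page = html.fromstring("""
    <div>
      <a class="title" href="/store/apps/details?id=example">Example App</a>
      <a class="subtitle">Example Publisher</a>
    </div>
    """)

    url_xpath = '//a[@class="title"]/@href'
    title_xpath = '//a[@class="title"]'
    content_xpath = '//a[@class="subtitle"]'

    results = []
    for url, title, content in zip(page.xpath(url_xpath),    # attribute XPaths yield strings
                                   page.xpath(title_xpath),  # element XPaths yield elements
                                   page.xpath(content_xpath)):
        results.append({'url': url,
                        'title': title.text_content(),
                        'content': content.text_content()})

    print(results)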
     engine : searchcode_code
     shortcut : scc

+  - name : subtitleseeker
+    engine : subtitleseeker
+    shortcut : ss
+# The language is an option. You can put any language written in english
+# Examples : English, French, German, Hungarian, Chinese...
+#    language : English
+
   - name : startpage
     engine : startpage
     shortcut : sp
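The commented language key works because extra keys in a settings.yml entry are copied onto the engine module when the engines are loaded, so the module can treat them as overridable globals. A hypothetical engine showing that shape (the URL layout is illustrative, not subtitleseeker's real one)::

    # module-level default, overridden when settings.yml sets `language`
    language = ''

    def request(query, params):
        lang = language or 'English'
        # illustrative URL only; the real engine builds its own
        params['url'] = 'https://example.com/{0}/search?q={1}'.format(lang, query)
        return params

    print(request('some movie', {'url': ''})['url'])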
     it : Italiano
     nl : Nederlands
     ja : 日本語 (Japanese)
+    tr : Türkçe

+0 -2  searx/static/oscar/js/searx.min.js

-/*! oscar/searx.min.js | 22-12-2014 | https://github.com/asciimoo/searx */
-requirejs.config({baseUrl:"./static/oscar/js",paths:{app:"../app"}}),searx.autocompleter&&(searx.searchResults=new Bloodhound({datumTokenizer:Bloodhound.tokenizers.obj.whitespace("value"),queryTokenizer:Bloodhound.tokenizers.whitespace,remote:"/autocompleter?q=%QUERY"}),searx.searchResults.initialize()),$(document).ready(function(){searx.autocompleter&&$("#q").typeahead(null,{name:"search-results",displayKey:function(a){return a},source:searx.searchResults.ttAdapter()})}),$(document).ready(function(){$("#q.autofocus").focus(),$(".select-all-on-click").click(function(){$(this).select()}),$(".btn-collapse").click(function(){var a=$(this).data("btn-text-collapsed"),b=$(this).data("btn-text-not-collapsed");""!==a&&""!==b&&(new_html=$(this).hasClass("collapsed")?$(this).html().replace(a,b):$(this).html().replace(b,a),$(this).html(new_html))}),$(".btn-toggle .btn").click(function(){var a="btn-"+$(this).data("btn-class"),b=$(this).data("btn-label-default"),c=$(this).data("btn-label-toggled");""!==c&&(new_html=$(this).hasClass("btn-default")?$(this).html().replace(b,c):$(this).html().replace(c,b),$(this).html(new_html)),$(this).toggleClass(a),$(this).toggleClass("btn-default")}),$(".btn-sm").dblclick(function(){var a="btn-"+$(this).data("btn-class");$(this).hasClass("btn-default")?($(".btn-sm > input").attr("checked","checked"),$(".btn-sm > input").prop("checked",!0),$(".btn-sm").addClass(a),$(".btn-sm").addClass("active"),$(".btn-sm").removeClass("btn-default")):($(".btn-sm > input").attr("checked",""),$(".btn-sm > input").removeAttr("checked"),$(".btn-sm > input").checked=!1,$(".btn-sm").removeClass(a),$(".btn-sm").removeClass("active"),$(".btn-sm").addClass("btn-default"))})}),$(document).ready(function(){$(".searx_overpass_request").on("click",function(a){var b="https://overpass-api.de/api/interpreter?data=",c=b+"[out:json][timeout:25];(",d=");out meta;",e=$(this).data("osm-id"),f=$(this).data("osm-type"),g=$(this).data("result-table"),h="#"+$(this).data("result-table-loadicon"),i=["addr:city","addr:country","addr:housenumber","addr:postcode","addr:street"];if(e&&f&&g){g="#"+g;var j=null;switch(f){case"node":j=c+"node("+e+");"+d;break;case"way":j=c+"way("+e+");"+d;break;case"relation":j=c+"relation("+e+");"+d}if(j){$.ajax(j).done(function(a){if(a&&a.elements&&a.elements[0]){var b=a.elements[0],c=$(g).html();for(var d in b.tags)if(null===b.tags.name||-1==i.indexOf(d)){switch(c+="<tr><td>"+d+"</td><td>",d){case"phone":case"fax":c+='<a href="tel:'+b.tags[d].replace(/ /g,"")+'">'+b.tags[d]+"</a>";break;case"email":c+='<a href="mailto:'+b.tags[d]+'">'+b.tags[d]+"</a>";break;case"website":case"url":c+='<a href="'+b.tags[d]+'">'+b.tags[d]+"</a>";break;case"wikidata":c+='<a href="https://www.wikidata.org/wiki/'+b.tags[d]+'">'+b.tags[d]+"</a>";break;case"wikipedia":if(-1!=b.tags[d].indexOf(":")){c+='<a href="https://'+b.tags[d].substring(0,b.tags[d].indexOf(":"))+".wikipedia.org/wiki/"+b.tags[d].substring(b.tags[d].indexOf(":")+1)+'">'+b.tags[d]+"</a>";break}default:c+=b.tags[d]}c+="</td></tr>"}$(g).html(c),$(g).removeClass("hidden"),$(h).addClass("hidden")}}).fail(function(){$(h).html($(h).html()+'<p class="text-muted">could not load data!</p>')})}}$(this).off(a)}),$(".searx_init_map").on("click",function(a){var 
b=$(this).data("leaflet-target"),c=$(this).data("map-lon"),d=$(this).data("map-lat"),e=$(this).data("map-zoom"),f=$(this).data("map-boundingbox"),g=$(this).data("map-geojson");require(["leaflet-0.7.3.min"],function(){f&&(southWest=L.latLng(f[0],f[2]),northEast=L.latLng(f[1],f[3]),map_bounds=L.latLngBounds(southWest,northEast)),L.Icon.Default.imagePath="./static/oscar/img/map";{var a=L.map(b),h="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png",i='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors',j=new L.TileLayer(h,{minZoom:1,maxZoom:19,attribution:i}),k="http://otile{s}.mqcdn.com/tiles/1.0.0/map/{z}/{x}/{y}.jpg",l='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors | Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="http://developer.mapquest.com/content/osm/mq_logo.png">',m=new L.TileLayer(k,{minZoom:1,maxZoom:18,subdomains:"1234",attribution:l}),n="http://otile{s}.mqcdn.com/tiles/1.0.0/sat/{z}/{x}/{y}.jpg",o='Map data © <a href="https://openstreetmap.org">OpenStreetMap</a> contributors | Tiles Courtesy of <a href="http://www.mapquest.com/" target="_blank">MapQuest</a> <img src="https://developer.mapquest.com/content/osm/mq_logo.png"> | Portions Courtesy NASA/JPL-Caltech and U.S. Depart. of Agriculture, Farm Service Agency';new L.TileLayer(n,{minZoom:1,maxZoom:11,subdomains:"1234",attribution:o})}map_bounds?setTimeout(function(){a.fitBounds(map_bounds,{maxZoom:17})},0):c&&d&&(e?a.setView(new L.LatLng(d,c),e):a.setView(new L.LatLng(d,c),8)),a.addLayer(m);var p={"OSM Mapnik":j,MapQuest:m};L.control.layers(p).addTo(a),g&&L.geoJson(g).addTo(a)}),$(this).off(a)})});

searx/static/courgette/css/style.css → searx/static/themes/courgette/css/style.css
searx/static/courgette/img/bg-body-index.jpg → searx/static/themes/courgette/img/bg-body-index.jpg
searx/static/oscar/img/favicon.png → searx/static/themes/courgette/img/favicon.png
searx/static/default/img/github_ribbon.png → searx/static/themes/courgette/img/github_ribbon.png
searx/static/default/img/icon_dailymotion.ico → searx/static/themes/courgette/img/icons/icon_dailymotion.ico
searx/static/default/img/icon_deviantart.ico → searx/static/themes/courgette/img/icons/icon_deviantart.ico
searx/static/default/img/icon_github.ico → searx/static/themes/courgette/img/icons/icon_github.ico
searx/static/default/img/icon_kickass.ico → searx/static/themes/courgette/img/icons/icon_kickass.ico
searx/static/default/img/icon_soundcloud.ico → searx/static/themes/courgette/img/icons/icon_soundcloud.ico
searx/static/default/img/icon_stackoverflow.ico → searx/static/themes/courgette/img/icons/icon_stackoverflow.ico
searx/static/default/img/icon_twitter.ico → searx/static/themes/courgette/img/icons/icon_twitter.ico
searx/static/default/img/icon_vimeo.ico → searx/static/themes/courgette/img/icons/icon_vimeo.ico
searx/static/default/img/icon_wikipedia.ico → searx/static/themes/courgette/img/icons/icon_wikipedia.ico
searx/static/default/img/icon_youtube.ico → searx/static/themes/courgette/img/icons/icon_youtube.ico
searx/static/courgette/img/preference-icon.png → searx/static/themes/courgette/img/preference-icon.png
searx/static/courgette/img/search-icon.png → searx/static/themes/courgette/img/search-icon.png
searx/static/courgette/img/searx-mobile.png → searx/static/themes/courgette/img/searx-mobile.png
searx/static/courgette/img/searx.png → searx/static/themes/courgette/img/searx.png
searx/static/default/img/searx_logo.svg → searx/static/themes/courgette/img/searx_logo.svg
searx/static/default/js/mootools-autocompleter-1.1.2-min.js → searx/static/themes/courgette/js/mootools-autocompleter-1.1.2-min.js
searx/static/default/js/mootools-core-1.4.5-min.js → searx/static/themes/courgette/js/mootools-core-1.4.5-min.js
searx/static/courgette/js/searx.js → searx/static/themes/courgette/js/searx.js
searx/static/default/css/style.css → searx/static/themes/default/css/style.css
searx/static/default/img/favicon.png → searx/static/themes/default/img/favicon.png
searx/static/courgette/img/github_ribbon.png → searx/static/themes/default/img/github_ribbon.png
searx/static/courgette/img/icon_dailymotion.ico → searx/static/themes/default/img/icons/icon_dailymotion.ico
searx/static/courgette/img/icon_deviantart.ico → searx/static/themes/default/img/icons/icon_deviantart.ico
searx/static/courgette/img/icon_github.ico → searx/static/themes/default/img/icons/icon_github.ico
searx/static/courgette/img/icon_kickass.ico → searx/static/themes/default/img/icons/icon_kickass.ico
searx/static/courgette/img/icon_soundcloud.ico → searx/static/themes/default/img/icons/icon_soundcloud.ico
searx/static/courgette/img/icon_stackoverflow.ico → searx/static/themes/default/img/icons/icon_stackoverflow.ico
searx/static/courgette/img/icon_twitter.ico → searx/static/themes/default/img/icons/icon_twitter.ico
searx/static/courgette/img/icon_vimeo.ico → searx/static/themes/default/img/icons/icon_vimeo.ico
searx/static/courgette/img/icon_wikipedia.ico → searx/static/themes/default/img/icons/icon_wikipedia.ico
searx/static/courgette/img/icon_youtube.ico → searx/static/themes/default/img/icons/icon_youtube.ico
searx/static/default/img/preference-icon.png → searx/static/themes/default/img/preference-icon.png
searx/static/default/img/search-icon.png → searx/static/themes/default/img/search-icon.png
searx/static/default/img/searx.png → searx/static/themes/default/img/searx.png
searx/static/courgette/img/searx_logo.svg → searx/static/themes/default/img/searx_logo.svg
searx/static/courgette/js/mootools-autocompleter-1.1.2-min.js → searx/static/themes/default/js/mootools-autocompleter-1.1.2-min.js
searx/static/courgette/js/mootools-core-1.4.5-min.js → searx/static/themes/default/js/mootools-core-1.4.5-min.js
searx/static/default/js/searx.js → searx/static/themes/default/js/searx.js
searx/static/default/less/autocompleter.less → searx/static/themes/default/less/autocompleter.less
searx/static/default/less/code.less → searx/static/themes/default/less/code.less
searx/static/default/less/definitions.less → searx/static/themes/default/less/definitions.less
searx/static/default/less/mixins.less → searx/static/themes/default/less/mixins.less
searx/static/default/less/search.less → searx/static/themes/default/less/search.less
searx/static/default/less/style.less → searx/static/themes/default/less/style.less
searx/static/oscar/.gitignore → searx/static/themes/oscar/.gitignore

searx/static/oscar/README.rst → searx/static/themes/oscar/README.rst

 install dependencies
 ~~~~~~~~~~~~~~~~~~~~

-run this command in the directory ``searx/static/oscar``
+run this command in the directory ``searx/static/themes/oscar``

 ``npm install``

 compile sources
 ~~~~~~~~~~~~~~~

-run this command in the directory ``searx/static/oscar``
+run this command in the directory ``searx/static/themes/oscar``

 ``grunt``

searx/static/oscar/css/bootstrap.min.css → searx/static/themes/oscar/css/bootstrap.min.css
searx/static/oscar/css/leaflet.min.css → searx/static/themes/oscar/css/leaflet.min.css
searx/static/oscar/css/oscar.min.css → searx/static/themes/oscar/css/oscar.min.css
searx/static/oscar/fonts/glyphicons-halflings-regular.eot → searx/static/themes/oscar/fonts/glyphicons-halflings-regular.eot
searx/static/oscar/fonts/glyphicons-halflings-regular.svg → searx/static/themes/oscar/fonts/glyphicons-halflings-regular.svg
searx/static/oscar/fonts/glyphicons-halflings-regular.ttf → searx/static/themes/oscar/fonts/glyphicons-halflings-regular.ttf
searx/static/oscar/fonts/glyphicons-halflings-regular.woff → searx/static/themes/oscar/fonts/glyphicons-halflings-regular.woff
searx/static/oscar/gruntfile.js → searx/static/themes/oscar/gruntfile.js
searx/static/courgette/img/favicon.png → searx/static/themes/oscar/img/favicon.png
searx/static/oscar/img/icons/README.md → searx/static/themes/oscar/img/icons/README.md
searx/static/oscar/img/icons/amazon.png → searx/static/themes/oscar/img/icons/amazon.png
searx/static/oscar/img/icons/dailymotion.png → searx/static/themes/oscar/img/icons/dailymotion.png
searx/static/oscar/img/icons/deviantart.png → searx/static/themes/oscar/img/icons/deviantart.png
searx/static/oscar/img/icons/facebook.png → searx/static/themes/oscar/img/icons/facebook.png
searx/static/oscar/img/icons/flickr.png → searx/static/themes/oscar/img/icons/flickr.png
searx/static/oscar/img/icons/github.png → searx/static/themes/oscar/img/icons/github.png
searx/static/oscar/img/icons/kickass.png → searx/static/themes/oscar/img/icons/kickass.png

BIN  searx/static/themes/oscar/img/icons/openstreetmap.png
BIN  searx/static/themes/oscar/img/icons/photon.png
BIN  searx/static/themes/oscar/img/icons/searchcode code.png
BIN  searx/static/themes/oscar/img/icons/searchcode doc.png


searx/static/oscar/img/icons/soundcloud.png → searx/static/themes/oscar/img/icons/soundcloud.png
searx/static/oscar/img/icons/stackoverflow.png → searx/static/themes/oscar/img/icons/stackoverflow.png


Some files were not shown because too many files changed in this diff