author     Sebastian Silva <sebastian@sugarlabs.org>  2011-12-09 00:20:29 (GMT)
committer  Sebastian Silva <sebastian@sugarlabs.org>  2011-12-09 00:20:29 (GMT)
commit     6968dc34e9e6b00b08d312385011bbaac68c5a49 (patch)
tree       76c1e1928baa4bc7ec1db709431897a667960334
parent     39e7a3959f9bc95bc191ea1ac1ea06db63bf592a (diff)

Moved to devel version of Hatta. Implemented shutdown. (HEAD, master)
-rw-r--r--  docs/.hg/cache/tags | 2
-rw-r--r--  docs/.hg/dirstate | bin 105 -> 176 bytes
-rw-r--r--  docs/.hg/hatta/cache/index.sqlite3 | bin 16384 -> 21504 bytes
-rw-r--r--  docs/.hg/hatta/cache/render/hexoquinasa_orig.jpg/128x128.png | bin 0 -> 15025 bytes
-rw-r--r--  docs/.hg/store/00changelog.i | bin 2131 -> 3169 bytes
-rw-r--r--  docs/.hg/store/00manifest.i | bin 1787 -> 2845 bytes
-rw-r--r--  docs/.hg/store/data/_home.i | bin 1985 -> 2838 bytes
-rw-r--r--  docs/.hg/store/data/_project%20_road_map.i | bin 0 -> 354 bytes
-rw-r--r--  docs/.hg/store/data/hexoquinasa__orig.jpg.i | bin 0 -> 54547 bytes
-rw-r--r--  docs/.hg/store/fncache | 2
-rw-r--r--  docs/.hg/store/undo | bin 54 -> 54 bytes
-rw-r--r--  docs/.hg/undo.desc | 2
-rw-r--r--  docs/.hg/undo.dirstate | bin 105 -> 176 bytes
-rw-r--r--  docs/Home | 20
-rw-r--r--  docs/Roadmap | 20
-rw-r--r--  docs/hexoquinasa_orig.jpg | bin 0 -> 57405 bytes
-rwxr-xr-x  websdk/hatta/__init__.py | 36
-rw-r--r--  websdk/hatta/__main__.py | 60
-rw-r--r--  websdk/hatta/config.py | 224
-rw-r--r--  websdk/hatta/data.py | 112
-rw-r--r--  websdk/hatta/error.py | 40
-rw-r--r--  websdk/hatta/hg_integration.py | 24
-rw-r--r--  websdk/hatta/page.py | 656
-rw-r--r--  websdk/hatta/parser.py | 529
-rw-r--r--  websdk/hatta/search.py | 317
-rw-r--r--  websdk/hatta/storage.py | 586
-rw-r--r--  websdk/hatta/templates/backlinks.html | 18
-rw-r--r--  websdk/hatta/templates/base.html | 59
-rw-r--r--  websdk/hatta/templates/changes.html | 16
-rw-r--r--  websdk/hatta/templates/edit_file.html | 25
-rw-r--r--  websdk/hatta/templates/edit_text.html | 29
-rw-r--r--  websdk/hatta/templates/history.html | 27
-rw-r--r--  websdk/hatta/templates/layout.html | 19
-rw-r--r--  websdk/hatta/templates/list.html | 10
-rw-r--r--  websdk/hatta/templates/page.html | 15
-rw-r--r--  websdk/hatta/templates/page_special.html | 13
-rw-r--r--  websdk/hatta/templates/wanted.html | 17
-rw-r--r--  websdk/hatta/wiki.py | 954

38 files changed, 3827 insertions, 5 deletions
diff --git a/docs/.hg/cache/tags b/docs/.hg/cache/tags
new file mode 100644
index 0000000..6676316
--- /dev/null
+++ b/docs/.hg/cache/tags
@@ -0,0 +1,2 @@
+21 841fe3e729aec3b7939e7c69e784601e0fd7b00b
+
diff --git a/docs/.hg/dirstate b/docs/.hg/dirstate
index a3d1428..3633f47 100644
--- a/docs/.hg/dirstate
+++ b/docs/.hg/dirstate
Binary files differ
diff --git a/docs/.hg/hatta/cache/index.sqlite3 b/docs/.hg/hatta/cache/index.sqlite3
index 398db33..a04b53b 100644
--- a/docs/.hg/hatta/cache/index.sqlite3
+++ b/docs/.hg/hatta/cache/index.sqlite3
Binary files differ
diff --git a/docs/.hg/hatta/cache/render/hexoquinasa_orig.jpg/128x128.png b/docs/.hg/hatta/cache/render/hexoquinasa_orig.jpg/128x128.png
new file mode 100644
index 0000000..c5aabc7
--- /dev/null
+++ b/docs/.hg/hatta/cache/render/hexoquinasa_orig.jpg/128x128.png
Binary files differ
diff --git a/docs/.hg/store/00changelog.i b/docs/.hg/store/00changelog.i
index dd04f11..eb62c1b 100644
--- a/docs/.hg/store/00changelog.i
+++ b/docs/.hg/store/00changelog.i
Binary files differ
diff --git a/docs/.hg/store/00manifest.i b/docs/.hg/store/00manifest.i
index d56606a..822a512 100644
--- a/docs/.hg/store/00manifest.i
+++ b/docs/.hg/store/00manifest.i
Binary files differ
diff --git a/docs/.hg/store/data/_home.i b/docs/.hg/store/data/_home.i
index e6f3f18..dbe101e 100644
--- a/docs/.hg/store/data/_home.i
+++ b/docs/.hg/store/data/_home.i
Binary files differ
diff --git a/docs/.hg/store/data/_project%20_road_map.i b/docs/.hg/store/data/_project%20_road_map.i
new file mode 100644
index 0000000..af320e3
--- /dev/null
+++ b/docs/.hg/store/data/_project%20_road_map.i
Binary files differ
diff --git a/docs/.hg/store/data/hexoquinasa__orig.jpg.i b/docs/.hg/store/data/hexoquinasa__orig.jpg.i
new file mode 100644
index 0000000..d579f9b
--- /dev/null
+++ b/docs/.hg/store/data/hexoquinasa__orig.jpg.i
Binary files differ
diff --git a/docs/.hg/store/fncache b/docs/.hg/store/fncache
index 1fbda5b..00c3f6b 100644
--- a/docs/.hg/store/fncache
+++ b/docs/.hg/store/fncache
@@ -1,2 +1,4 @@
data/Home.i
data/Como%20editar%20esta%20Wiki.i
+data/hexoquinasa_orig.jpg.i
+data/Project%20RoadMap.i
diff --git a/docs/.hg/store/undo b/docs/.hg/store/undo
index c56b632..074a0a2 100644
--- a/docs/.hg/store/undo
+++ b/docs/.hg/store/undo
Binary files differ
diff --git a/docs/.hg/undo.desc b/docs/.hg/undo.desc
index 0700973..15fed6a 100644
--- a/docs/.hg/undo.desc
+++ b/docs/.hg/undo.desc
@@ -1,2 +1,2 @@
-14
+21
commit
diff --git a/docs/.hg/undo.dirstate b/docs/.hg/undo.dirstate
index d13aca8..6c99a4b 100644
--- a/docs/.hg/undo.dirstate
+++ b/docs/.hg/undo.dirstate
Binary files differ
diff --git a/docs/Home b/docs/Home
index aef4359..1b21ca6 100644
--- a/docs/Home
+++ b/docs/Home
@@ -1,15 +1,27 @@
= Puno Pilot Deployment Team =
-== Hexoquinasa Distribution by Sistemas Sustentables SAS ==
+== Hexoquinasa Distribution by SomosAZUCAR.org ==
-**Mission**
+{{hexoquinasa_orig.jpg|Hexoquinasa 1}}
+
+**Mission:**
To measure and improve the user experience of learners from Puno-Region by localizing, distributing and supporting software based on their needs and conditions.
-**Main Objective**
+**Main Objective:**
Field Testing + User Support + Product Development
+**Implementation:**
+A methodology is proposed for the sustainable support, monitoring and continuous improvement of a Sugar deployment.
+
+== Project Documentation ==
+
+* [[Roadmap]]
+* [[Operating System Image]]
+* [[Action Tracker]]
+* [[Activity Library]]
+
Hexoquinasa currently consists of a friendly, documented fork of Dextrose 3 and Toast 5. In order to provide support in a distributed fashion the process of building the Operating System must be distributed as well. So Hexoquinasa is a meta-project: a methodology for sustaining a Sugar Distribution.
* [[Como editar esta Wiki]]
* [[Como sincronizar esta Wiki]]
* [[Como obtener soporte]]
-* [[Como documentar este proyecto]]
\ No newline at end of file
+* [[Como documentar este proyecto]]
diff --git a/docs/Roadmap b/docs/Roadmap
new file mode 100644
index 0000000..8268fc9
--- /dev/null
+++ b/docs/Roadmap
@@ -0,0 +1,20 @@
+== Tasks for Technical Team ==
+
+From the perspective of roles:
+
+===1. Students===
+* Consult [[Activity Library]]
+* Return [[Statistics]]
+
+===2. Teachers===
+* Maintain their Laptops as [[Schoolserver]]
+* Facilitate [[Backups]]
+* Use and share the [[Wiki]]
+* Use and share the [[Action Tracker]]
+* Share package updates
+* Host [[Activity Library]]
+* Collect [[Statistics]]
+* Provide [[Activation]]
+
+===3. Coaches===
+* Bring USB keys to [[synchronize in both directions]]
\ No newline at end of file
diff --git a/docs/hexoquinasa_orig.jpg b/docs/hexoquinasa_orig.jpg
new file mode 100644
index 0000000..dd7e4fc
--- /dev/null
+++ b/docs/hexoquinasa_orig.jpg
Binary files differ
diff --git a/websdk/hatta/__init__.py b/websdk/hatta/__init__.py
new file mode 100755
index 0000000..76269a5
--- /dev/null
+++ b/websdk/hatta/__init__.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# @copyright: 2008-2009 Radomir Dopieralski <hatta@sheep.art.pl>
+# @license: GNU GPL, see COPYING for details.
+
+"""
+Hatta Wiki is a wiki engine designed to be used with Mercurial repositories.
+It requires Mercurial and Werkzeug python modules.
+
+Hatta's pages are just plain text files (and also images, binaries, etc.) in
+some directory in your repository. For example, you can put it in your
+project's "docs" directory to keep documentation. The files can be edited both
+from the wiki or with a text editor -- in either case the changes committed to
+the repository will appear in the recent changes and in page's history.
+
+See hatta.py --help for usage.
+"""
+
+# Exposed API
+from wiki import Wiki, WikiResponse, WikiRequest
+from config import WikiConfig, read_config
+from __main__ import main
+from parser import WikiParser, WikiWikiParser
+from storage import WikiStorage, WikiSubdirectoryStorage
+from page import WikiPage, WikiPageText, WikiPageWiki
+from page import WikiPageColorText, WikiPageFile, WikiPageImage
+
+# Project's metainformation
+__version__ = '1.4.1dev'
+project_name = 'Hatta'
+project_url = 'http://hatta-wiki.org/'
+project_description = 'Wiki engine that lives in Mercurial repository.'
+
+# Make it work as Mercurial extension
+from hg_integration import cmdtable
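
The __init__.py above exposes the public API at package level. A minimal sketch of embedding the wiki from Python, assuming the websdk directory is on sys.path, the dependencies (Werkzeug, Mercurial) are installed, and a hypothetical "docs" pages directory exists:

    import hatta

    config = hatta.WikiConfig(pages_path='docs')  # hypothetical pages directory
    config.sanitize()                             # coerce option types
    wiki = hatta.Wiki(config)
    # wiki.application is a WSGI callable that any WSGI server can mount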
diff --git a/websdk/hatta/__main__.py b/websdk/hatta/__main__.py
new file mode 100644
index 0000000..55b51a1
--- /dev/null
+++ b/websdk/hatta/__main__.py
@@ -0,0 +1,60 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+import sys
+
+from config import read_config
+from wiki import Wiki
+
+# Avoid WSGI errors, see http://mercurial.selenic.com/bts/issue1095
+sys.stdout = sys.__stdout__
+sys.stderr = sys.__stderr__
+
+
+def application(env, start):
+ """Detect that we are being run as WSGI application."""
+
+ global application
+ config = read_config()
+ script_dir = os.path.dirname(os.path.abspath(__file__))
+ if config.get('pages_path') is None:
+ config.set('pages_path', os.path.join(script_dir, 'docs'))
+ wiki = Wiki(config)
+ application = wiki.application
+ return application(env, start)
+
+
+def main(config=None, wiki=None):
+ """Start a standalone WSGI server."""
+
+ config = config or read_config()
+ wiki = wiki or Wiki(config)
+ app = wiki.application
+
+ host, port = (config.get('interface', '0.0.0.0'),
+ int(config.get('port', 8080)))
+ try:
+ from cherrypy import wsgiserver
+ except ImportError:
+ try:
+ from cherrypy import _cpwsgiserver as wsgiserver
+ except ImportError:
+ import wsgiref.simple_server
+ server = wsgiref.simple_server.make_server(host, port, app)
+ try:
+ server.serve_forever()
+ except KeyboardInterrupt:
+ pass
+ return
+ apps = [('', app)]
+ name = wiki.site_name
+ server = wsgiserver.CherryPyWSGIServer((host, port), apps,
+ server_name=name)
+ try:
+ server.start()
+ except KeyboardInterrupt:
+ server.stop()
+
+if __name__ == "__main__":
+ main()
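
The application function above rebinds itself on first call so later requests skip the setup, while main() prefers CherryPy's wsgiserver and falls back to the stdlib wsgiref server. A minimal sketch of serving the module under plain wsgiref (host and port are assumptions; dependencies must be installed):

    from wsgiref.simple_server import make_server

    from hatta.__main__ import application

    # hypothetical host/port; application() configures itself on the first request
    make_server('127.0.0.1', 8080, application).serve_forever()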
diff --git a/websdk/hatta/config.py b/websdk/hatta/config.py
new file mode 100644
index 0000000..bcdc9a8
--- /dev/null
+++ b/websdk/hatta/config.py
@@ -0,0 +1,224 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+
+
+OPTIONS = []
+VALID_NAMES = set()
+
+
+def _add(short, long, dest, help, default=None, metavar=None,
+ action=None, type=None):
+ """Helper for building the list of options."""
+
+ OPTIONS.append((short, long, dest, help, default, metavar, action, type))
+ VALID_NAMES.add(dest)
+
+_add('-V', '--version', dest='show_version', default=False,
+ help='Display version and exit', action="store_true")
+_add('-d', '--pages-dir', dest='pages_path',
+ help='Store pages in DIR', metavar='DIR')
+_add('-t', '--cache-dir', dest='cache_path',
+ help='Store cache in DIR', metavar='DIR')
+_add('-i', '--interface', dest='interface',
+ help='Listen on interface INT', metavar='INT')
+_add('-p', '--port', dest='port', type='int',
+ help='Listen on port PORT', metavar='PORT')
+_add('-s', '--script-name', dest='script_name',
+ help='Override SCRIPT_NAME to NAME', metavar='NAME')
+_add('-n', '--site-name', dest='site_name',
+ help='Set the name of the site to NAME', metavar='NAME')
+_add('-m', '--front-page', dest='front_page',
+ help='Use PAGE as the front page', metavar='PAGE')
+_add('-e', '--encoding', dest='page_charset',
+ help='Use encoding ENC to read and write pages', metavar='ENC')
+_add('-c', '--config-file', dest='config_file',
+ help='Read configuration from FILE', metavar='FILE')
+_add('-l', '--language', dest='language',
+ help='Translate interface to LANG', metavar='LANG')
+_add('-r', '--read-only', dest='read_only',
+ help='Whether the wiki should be read-only', action="store_true")
+_add('-g', '--icon-page', dest='icon_page', metavar="PAGE",
+ help='Read icons graphics from PAGE.')
+_add('-w', '--hgweb', dest='hgweb',
+ help='Enable hgweb access to the repository', action="store_true")
+_add('-W', '--wiki-words', dest='wiki_words',
+ help='Enable WikiWord links', action="store_true")
+_add('-I', '--ignore-indent', dest='ignore_indent',
+ help='Treat indented lines as normal text', action="store_true")
+_add('-P', '--pygments-style', dest='pygments_style',
+ help='Use the STYLE pygments style for highlighting',
+ metavar='STYLE')
+_add('-D', '--subdirectories', dest='subdirectories',
+ action="store_true",
+ help='Store subpages as subdirectories in the filesystem')
+_add('-E', '--extension', dest='extension',
+ help='Extension to add to wiki page files')
+_add('-U', '--unix-eol', dest='unix_eol',
+ action="store_true",
+ help='Convert all text pages to UNIX-style CR newlines')
+
+
+class WikiConfig(object):
+ """
+ Responsible for reading and storing site configuration. Contains the
+ default settings.
+
+ >>> config = WikiConfig(port='2080')
+ >>> config.sanitize()
+ >>> config.get('port')
+ 2080
+ """
+
+ default_filename = u'hatta.conf'
+
+ def __init__(self, **kw):
+ self.config = dict(kw)
+ self.valid_names = set(VALID_NAMES)
+ self.parse_environ()
+ self.options = list(OPTIONS)
+
+ def sanitize(self):
+ """
+ Convert options to their required types.
+ """
+
+ try:
+ self.config['port'] = int(self.get('port', 0))
+ except ValueError:
+ self.config['port'] = 8080
+
+ def parse_environ(self):
+ """Check the environment variables for options."""
+
+ prefix = 'HATTA_'
+ for key, value in os.environ.iteritems():
+ if key.startswith(prefix):
+ name = key[len(prefix):].lower()
+ if name in self.valid_names:
+ self.config[name] = value
+
+ def parse_args(self):
+ """Check the commandline arguments for options."""
+
+ import optparse
+
+ parser = optparse.OptionParser()
+ for (short, long, dest, help, default, metavar, action,
+ type) in self.options:
+ parser.add_option(short, long, dest=dest, help=help, type=type,
+ default=default, metavar=metavar, action=action)
+
+ options, args = parser.parse_args()
+ for option, value in options.__dict__.iteritems():
+ if value is not None:
+ self.config[option] = value
+ if args:
+ self.config['pages_path'] = args[0]
+
+ def parse_files(self, files=None):
+ """Check the config files for options."""
+
+ import ConfigParser
+
+ if files is None:
+ files = [self.get('config_file', self.default_filename)]
+ parser = ConfigParser.SafeConfigParser()
+ parser.read(files)
+ section = 'hatta'
+ try:
+ options = parser.items(section)
+ except ConfigParser.NoSectionError:
+ return
+ for option, value in options:
+ if option not in self.valid_names:
+ raise ValueError('Invalid option name "%s".' % option)
+ self.config[option] = value
+
+ def save_config(self, filename=None):
+ """Saves configuration to a given file."""
+ if filename is None:
+ filename = self.default_filename
+
+ import ConfigParser
+ parser = ConfigParser.RawConfigParser()
+ section = 'hatta'
+ parser.add_section(section)
+ for key, value in self.config.iteritems():
+ parser.set(section, str(key), str(value))
+
+ configfile = open(filename, 'wb')
+ try:
+ parser.write(configfile)
+ finally:
+ configfile.close()
+
+ def get(self, option, default_value=None):
+ """
+ Get the value of a config option or default if not set.
+
+ >>> config = WikiConfig(option=4)
+ >>> config.get("ziew", 3)
+ 3
+ >>> config.get("ziew")
+ >>> config.get("ziew", "ziew")
+ 'ziew'
+ >>> config.get("option")
+ 4
+ """
+
+ return self.config.get(option, default_value)
+
+ def get_bool(self, option, default_value=False):
+ """
+ Like get, only convert the value to True or False.
+ """
+
+ value = self.get(option, default_value)
+ if value in (
+ 1, True,
+ 'True', 'true', 'TRUE',
+ '1',
+ 'on', 'On', 'ON',
+ 'yes', 'Yes', 'YES',
+ 'enable', 'Enable', 'ENABLE',
+ 'enabled', 'Enabled', 'ENABLED',
+ ):
+ return True
+ elif value in (
+ None, 0, False,
+ 'False', 'false', 'FALSE',
+ '0',
+ 'off', 'Off', 'OFF',
+ 'no', 'No', 'NO',
+ 'disable', 'Disable', 'DISABLE',
+ 'disabled', 'Disabled', 'DISABLED',
+ ):
+ return False
+ else:
+ raise ValueError("expected boolean value")
+
+ def set(self, key, value):
+ self.config[key] = value
+
+
+def read_config():
+ """Read and parse the config."""
+
+ config = WikiConfig(
+ # Here you can modify the configuration: uncomment and change the ones
+        # you need. Note that it's better to use environment variables or
+        # command line switches.
+
+ # interface='',
+ # port=8080,
+ # pages_path = 'docs',
+ # front_page = 'Home',
+ # site_name = 'Hatta Wiki',
+ # page_charset = 'UTF-8',
+ )
+ config.parse_args()
+ config.parse_files()
+ # config.sanitize()
+ return config
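
read_config() layers the option sources: constructor keywords, then HATTA_-prefixed environment variables (read in __init__ via parse_environ), then command-line switches, then an INI file with a [hatta] section. A minimal sketch using the environment layer (the values are hypothetical):

    import os

    from hatta.config import WikiConfig

    os.environ['HATTA_PORT'] = '2080'        # hypothetical value
    os.environ['HATTA_SITE_NAME'] = 'Demo'   # hypothetical value
    config = WikiConfig()                    # parse_environ() runs here
    config.sanitize()                        # coerces 'port' to an int
    print config.get('port')                 # -> 2080
    print config.get('site_name')            # -> 'Demo'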
diff --git a/websdk/hatta/data.py b/websdk/hatta/data.py
new file mode 100644
index 0000000..a0784cc
--- /dev/null
+++ b/websdk/hatta/data.py
@@ -0,0 +1,112 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import base64
+
+
+icon = base64.b64decode(
+'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABHNCSVQICAgIfAhki'
+'AAAAAlwSFlzAAAEnQAABJ0BfDRroQAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBo'
+'AAALWSURBVDiNbdNLaFxlFMDx//fd19x5JdNJm0lIImPaYm2MfSUggrssXBVaChUfi1JwpQtxK7gqu'
+'LMbQQQ3bipU0G3Rgg98DBpraWob00kzM6Z5TF7tdObm3vvd46K0TBo/OLtzfnychxIRut+Zo2/19vT'
+'kLxXze6biONbGJMRipL39MJyt33rvp+rVT7rzVTfw2vFzLxwcLf/V7oSq1W4hACIkIigUtnaoNecXG'
+'2u14T8blQRAd2v7yyN/RLFR6IRM1iedSeFnUvhpDydlI9ow0lcedG3348c1djeQz+WcThjgYZMgGBG'
+'SJMEYgzGGODLEoTBYGH4DeHcXoDSSzaRVogQjyaMwhtgYcoUco+Nl5qbnubFw7fr//uB2tXp78uj4c'
+'0YJsSTESUxsDCemjjH6YhnbtbA8xaVv7n/0uGZHDx48aH8+17iLJQrf9vCdFL7tkcn7/Pb7r8zdmWP'
+'2zqwopa7sAl4/cV4NlvrPbgch7aBN1vUIOw9ZWmmw2dqkb18fQSegOrOgfD9zahfQ37/3su+ljj1T6'
+'uCnAyxtoZVGa41tWSilULWfCZdaPD986MsjQxOHdwC9PdmT2tLk0oozpxfYf2SZwp4Iz1X4UZWBe1+'
+'z9+5X+OkiruWpYr744ZMmvjn5dvrwoVHLdRzWtobY2Kwx9soyz5ZXuV9fQ5pXCBabXKuXcBwbYwxYe'
+'kIppTXAF5VP2xutrVYmm8bzM1z9foSZik1z1SWMNLW1AtMrB/gnnMJxbSxbUV2a/QHQT8Y4c+vvC8V'
+'C74VCoZcodvnxux5Msg+THCSKHy2R48YgIb/crITrreZlEYl33MKrYycvvnx88p2BUkkpRyGSEBmDi'
+'WI6QcC95UUqM9PBzdqN99fbzc9EJNwBKKUoFw+8NDY8/sFQ/8CE57l5pZRdX6kHqxurW43mv98urM9'
+'fjJPouohE8NQ1dkEayAJ5wAe2gRawJSKmO/c/aERMn5m9/ksAAAAASUVORK5CYII=')
+
+scripts = r"""function hatta_dates(){var a=document.getElementsByTagName(
+'abbr');var p=function(i){return('00'+i).slice(-2)};for(var i=0;i<a.length;++i)
+{var n=a[i];if(n.className==='date'){var m=
+/^([0-9]{4})-([0-9]{2})-([0-9]{2})T([0-9]{2}):([0-9]{2}):([0-9]{2})Z$/.exec(
+n.getAttribute('title'));var d=new Date(Date.UTC(+m[1],+m[2]-1,+m[3],+m[4],
++m[5],+m[6]));if(d){var b=-d.getTimezoneOffset()/60;if(b>=0){b="+"+b}
+n.textContent=""+d.getFullYear()+"-"+p(d.getMonth()+1)+"-"+p(d.getDate())+" "+
+p(d.getHours())+":"+p(d.getMinutes())+" GMT"+b}}}}function hatta_edit(){var b=
+document.getElementById('editortext');if(b){var c=0+
+document.location.hash.substring(1);var d=b.textContent.match(/(.*\n)/g);var
+f='';for(var i=0;i<d.length&&i<c;++i){f+=d[i]}b.focus();if(b.setSelectionRange)
+{b.setSelectionRange(f.length,f.length)}else if(b.createTextRange){var g=
+b.createTextRange();g.collapse(true);g.moveEnd('character',f.length);
+g.moveStart('character',f.length);g.select()}var h=document.createElement('pre'
+);b.parentNode.appendChild(h);var k=window.getComputedStyle(b,'');h.style.font=
+k.font;h.style.border=k.border;h.style.outline=k.outline;h.style.lineHeight=
+k.lineHeight;h.style.letterSpacing=k.letterSpacing;h.style.fontFamily=
+k.fontFamily;h.style.fontSize=k.fontSize;h.style.padding=0;h.style.overflow=
+'scroll';try{h.style.whiteSpace="-moz-pre-wrap"}catch(e){};try{
+h.style.whiteSpace="-o-pre-wrap"}catch(e){};try{h.style.whiteSpace="-pre-wrap"
+}catch(e){};try{h.style.whiteSpace="pre-wrap"}catch(e){};h.textContent=f;
+b.scrollTop=h.scrollHeight;h.parentNode.removeChild(h)}else{var l='';var m=
+document.getElementsByTagName('link');for(var i=0;i<m.length;++i){var n=m[i];
+if(n.getAttribute('type')==='application/wiki'){l=n.getAttribute('href')}}if(
+l===''){return}var o=['p','h1','h2','h3','h4','h5','h6','pre','ul','div',
+'span'];for(var j=0;j<o.length;++j){var m=document.getElementsByTagName(o[j]);
+for(var i=0;i<m.length;++i){var n=m[i];if(n.id&&n.id.match(/^line_\d+$/)){
+n.ondblclick=function(){var a=l+'#'+this.id.replace('line_','');
+document.location.href=a}}}}}}
+window.onload=function(){hatta_dates();hatta_edit()}"""
+
+style = """\
+html { background: #fff; color: #2e3436;
+ font-family: sans-serif; font-size: 96% }
+body { margin: 1em auto; line-height: 1.3; width: 40em }
+a { color: #3465a4; text-decoration: none }
+a:hover { text-decoration: underline }
+a.wiki:visited { color: #204a87 }
+a.nonexistent, a.nonexistent:visited { color: #a40000; }
+a.external { color: #3465a4; text-decoration: underline }
+a.external:visited { color: #75507b }
+a img { border: none }
+img.math, img.smiley { vertical-align: middle }
+pre { font-size: 100%; white-space: pre-wrap; word-wrap: break-word;
+ white-space: -moz-pre-wrap; white-space: -pre-wrap;
+ white-space: -o-pre-wrap; line-height: 1.2; color: #555753 }
+div.conflict pre.local { background: #fcaf3e; margin-bottom: 0; color: 000}
+div.conflict pre.other { background: #ffdd66; margin-top: 0; color: 000; border-top: #d80 dashed 1px; }
+pre.diff div.orig { font-size: 75%; color: #babdb6 }
+b.highlight, pre.diff ins { font-weight: bold; background: #fcaf3e;
+color: #ce5c00; text-decoration: none }
+pre.diff del { background: #eeeeec; color: #888a85; text-decoration: none }
+pre.diff div.change { border-left: 2px solid #fcaf3e }
+div#hatta-footer { border-top: solid 1px #babdb6; text-align: right }
+h1, h2, h3, h4 { color: #babdb6; font-weight: normal; letter-spacing: 0.125em}
+div.buttons { text-align: center }
+input.button, div.buttons input { font-weight: bold; font-size: 100%;
+ background: #eee; border: solid 1px #babdb6; margin: 0.25em; color: #888a85}
+.history input.button { font-size: 75% }
+.editor textarea { width: 100%; display: block; font-size: 100%;
+ border: solid 1px #babdb6; }
+.editor label { display:block; text-align: right }
+.editor .upload { margin: 2em auto; text-align: center }
+form#hatta-search input#hatta-search, .editor label input { font-size: 100%;
+ border: solid 1px #babdb6; margin: 0.125em 0 }
+.editor label.comment input { width: 32em }
+a#hatta-logo { float: left; display: block; margin: 0.25em }
+div#hatta-header h1 { margin: 0; }
+div#hatta-content { clear: left }
+form#hatta-search { margin:0; text-align: right; font-size: 80% }
+div.snippet { font-size: 80%; color: #888a85 }
+div#hatta-header div#hatta-menu { float: right; margin-top: 1.25em }
+div#hatta-header div#hatta-menu a.current { color: #000 }
+hr { background: transparent; border:none; height: 0;
+ border-bottom: 1px solid #babdb6; clear: both }
+blockquote { border-left:.25em solid #ccc; padding-left:.5em; margin-left:0}
+abbr.date {border:none}
+dt {font-weight: bold; float: left; }
+dd {font-style: italic; }
+@media print {
+ body {background:white;color:black;font-size:100%;font-family:serif;}
+ #hatta-search, #hatta-menu, #hatta-footer {display:none;}
+ a:link, a:visited {color:#520;font-weight:bold;text-decoration:underline;}
+ #hatta-content {width:auto;}
+ #hatta-content a:link:after,
+ #hatta-content a:visited:after{content:" ["attr(href)"] ";font-size:90%;}
+}
+"""
+
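
The hatta_dates function in the scripts blob above rewrites the UTC timestamps in <abbr class="date"> elements into the reader's local time zone; those elements are produced by date_html in page.py, later in this commit. A minimal sketch of the producing side, assuming a UTC datetime:

    import datetime

    # mirrors the strftime format used by date_html() in page.py
    dt = datetime.datetime(2011, 12, 9, 0, 20, 29)
    print dt.strftime('<abbr class="date" title="%Y-%m-%dT%H:%M:%SZ">'
                      '%Y-%m-%d %H:%M</abbr>')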
diff --git a/websdk/hatta/error.py b/websdk/hatta/error.py
new file mode 100644
index 0000000..54edee2
--- /dev/null
+++ b/websdk/hatta/error.py
@@ -0,0 +1,40 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import werkzeug.exceptions
+
+
+class WikiError(werkzeug.exceptions.HTTPException):
+ """Base class for all error pages."""
+
+
+class BadRequest(WikiError):
+ code = 400
+
+
+class ForbiddenErr(WikiError):
+ code = 403
+
+
+class NotFoundErr(WikiError):
+ code = 404
+
+
+class RequestEntityTooLarge(WikiError):
+ code = 413
+
+
+class RequestURITooLarge(WikiError):
+ code = 414
+
+
+class UnsupportedMediaTypeErr(WikiError):
+ code = 415
+
+
+class NotImplementedErr(WikiError):
+ code = 501
+
+
+class ServiceUnavailableErr(WikiError):
+ code = 503
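
Each subclass only overrides code; Werkzeug's HTTPException machinery turns a raised instance into the matching HTTP response. A minimal sketch of how a view might use them (the view and storage here are hypothetical, using the same flat import style as the modules in this commit):

    import error

    def view_page(storage, title):
        # raising the exception is enough; werkzeug renders the 404 page
        if title not in storage:
            raise error.NotFoundErr(u'Page %s does not exist' % title)
        return storage.page_text(title)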
diff --git a/websdk/hatta/hg_integration.py b/websdk/hatta/hg_integration.py
new file mode 100644
index 0000000..9aa226d
--- /dev/null
+++ b/websdk/hatta/hg_integration.py
@@ -0,0 +1,24 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+
+from config import WikiConfig
+from __main__ import main
+
+
+def run_wiki(ui, repo, directory=None, **opts):
+ """Start serving Hatta in the provided repository."""
+
+ config = WikiConfig()
+ config.set('pages_path', directory or os.path.join(repo.root, 'docs'))
+ ui.write('Starting wiki at http://127.0.0.1:8080\n')
+ main(config=config)
+
+cmdtable = {
+ 'wiki': (
+ run_wiki, [
+ ],
+ "hg wiki [options] directory",
+ ),
+}
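
The cmdtable lets Mercurial load this package as an extension, adding an "hg wiki" command that serves the repository's docs directory. A hedged .hgrc sketch (the filesystem path is an assumption):

    [extensions]
    # hypothetical checkout path; point Mercurial at the hatta package
    hatta = /path/to/websdk/hatta

After that, running "hg wiki" inside a repository starts the server announced at http://127.0.0.1:8080.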
diff --git a/websdk/hatta/page.py b/websdk/hatta/page.py
new file mode 100644
index 0000000..3992552
--- /dev/null
+++ b/websdk/hatta/page.py
@@ -0,0 +1,656 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import difflib
+import mimetypes
+import re
+import os
+
+import werkzeug
+import werkzeug.contrib.atom
+
+pygments = None
+try:
+ import pygments
+ import pygments.util
+ import pygments.lexers
+ import pygments.formatters
+ import pygments.styles
+except ImportError:
+ pass
+
+Image = None
+try:
+ import Image
+except ImportError:
+ pass
+
+import parser
+import error
+
+
+def page_mime(title):
+ """
+    Guess page's mime type based on the corresponding file name.
+    Default to text/x-wiki for files without an extension.
+
+ >>> page_mime(u'something.txt')
+ 'text/plain'
+ >>> page_mime(u'SomePage')
+ 'text/x-wiki'
+ >>> page_mime(u'ąęśUnicodePage')
+ 'text/x-wiki'
+ >>> page_mime(u'image.png')
+ 'image/png'
+ >>> page_mime(u'style.css')
+ 'text/css'
+ >>> page_mime(u'archive.tar.gz')
+ 'archive/gzip'
+ """
+
+    addr = title.encode('utf-8') # the encoding doesn't really matter here
+ mime, encoding = mimetypes.guess_type(addr, strict=False)
+ if encoding:
+ mime = 'archive/%s' % encoding
+ if mime is None:
+ mime = 'text/x-wiki'
+ return mime
+
+
+def date_html(date_time):
+ """
+ Create HTML for a date, according to recommendation at
+ http://microformats.org/wiki/date
+ """
+
+ return date_time.strftime(
+ '<abbr class="date" title="%Y-%m-%dT%H:%M:%SZ">%Y-%m-%d %H:%M</abbr>')
+
+
+class WikiPage(object):
+ """Everything needed for rendering a page."""
+
+ def __init__(self, wiki, request, title, mime):
+ self.request = request
+ self.title = title
+ self.mime = mime
+ # for now we just use the globals from wiki object
+ if request:
+ self.get_url = request.get_url
+ self.get_download_url = request.get_download_url
+ self.wiki = wiki
+ self.storage = self.wiki.storage
+ self.index = self.wiki.index
+ self.config = self.wiki.config
+ if self.wiki.alias_page and self.wiki.alias_page in self.storage:
+ self.aliases = dict(
+ self.index.page_links_and_labels(self.wiki.alias_page))
+ else:
+ self.aliases = {}
+
+ def link_alias(self, addr):
+ """Find a target address for an alias."""
+
+ try:
+ alias, target = addr.split(':', 1)
+ except ValueError:
+ return self.wiki.alias_page
+ try:
+ pattern = self.aliases[alias]
+ except KeyError:
+ return self.wiki.alias_page
+ try:
+ link = pattern % target
+ except TypeError:
+ link = pattern + target
+ return link
+
+ def wiki_link(self, addr, label=None, class_=None, image=None, lineno=0):
+ """Create HTML for a wiki link."""
+
+ addr = addr.strip()
+ text = werkzeug.escape(label or addr)
+ chunk = ''
+ if class_ is not None:
+ classes = [class_]
+ else:
+ classes = []
+ if parser.external_link(addr):
+ classes.append('external')
+ if addr.startswith('mailto:'):
+ # Obfuscate e-mails a little bit.
+ classes.append('mail')
+ text = text.replace('@', '&#64;').replace('.', '&#46;')
+ href = werkzeug.escape(addr,
+ quote=True).replace('@', '%40').replace('.', '%2E')
+ else:
+ href = werkzeug.escape(werkzeug.url_fix(addr), quote=True)
+ else:
+ if '#' in addr:
+ addr, chunk = addr.split('#', 1)
+ chunk = '#' + werkzeug.url_fix(chunk)
+ if addr.startswith(':'):
+ alias = self.link_alias(addr[1:])
+ href = werkzeug.escape(werkzeug.url_fix(alias) + chunk, True)
+ classes.append('external')
+ classes.append('alias')
+ elif addr.startswith('+'):
+ href = '/'.join([self.request.script_root,
+ '+' + werkzeug.escape(addr[1:], quote=True)])
+ classes.append('special')
+ elif addr == u'':
+ href = werkzeug.escape(chunk, True)
+ classes.append('anchor')
+ else:
+ classes.append('wiki')
+ href = werkzeug.escape(self.get_url(addr) + chunk, True)
+ if addr not in self.storage:
+ classes.append('nonexistent')
+ class_ = werkzeug.escape(' '.join(classes) or '', True)
+ # We need to output HTML on our own to prevent escaping of href
+ return '<a href="%s" class="%s" title="%s">%s</a>' % (
+ href, class_, werkzeug.escape(addr + chunk, True),
+ image or text)
+
+ def wiki_image(self, addr, alt, class_='wiki', lineno=0):
+ """Create HTML for a wiki image."""
+
+ addr = addr.strip()
+ html = werkzeug.html
+ chunk = ''
+ if parser.external_link(addr):
+ return html.img(src=werkzeug.url_fix(addr), class_="external",
+ alt=alt)
+ if '#' in addr:
+ addr, chunk = addr.split('#', 1)
+ if addr == '':
+ return html.a(name=chunk)
+ elif addr.startswith(':'):
+ if chunk:
+ chunk = '#' + chunk
+ alias = self.link_alias(addr[1:])
+ href = werkzeug.url_fix(alias + chunk)
+ return html.img(src=href, class_="external alias", alt=alt)
+ elif addr in self.storage:
+ mime = page_mime(addr)
+ if mime.startswith('image/'):
+ return html.img(src=self.get_download_url(addr), class_=class_,
+ alt=alt)
+ else:
+ return html.img(href=self.get_download_url(addr), alt=alt)
+ else:
+ return html.a(html(alt), href=self.get_url(addr))
+
+ def menu(self):
+ """Generate the menu items"""
+ _ = self.wiki.gettext
+ if self.wiki.menu_page in self.storage:
+ items = self.index.page_links_and_labels(self.wiki.menu_page)
+ else:
+ items = [
+ (self.wiki.front_page, self.wiki.front_page),
+ ('+history', _(u'Recent changes')),
+ ]
+ for link, label in items:
+ if link == self.title:
+ class_ = "current"
+ else:
+ class_ = None
+ yield self.wiki_link(link, label, class_=class_)
+
+ def template(self, template_name, **kwargs):
+ template = self.wiki.template_env.get_template(template_name)
+ edit_url = None
+ if self.title:
+ try:
+ self.wiki._check_lock(self.title)
+ edit_url = self.get_url(self.title, self.wiki.edit)
+ except error.ForbiddenErr:
+ pass
+ context = {
+ 'request': self.request,
+ 'wiki': self.wiki,
+ 'title': self.title,
+ 'mime': self.mime,
+ 'url': self.get_url,
+ 'download_url': self.get_download_url,
+ 'config': self.config,
+ 'page': self,
+ 'edit_url': edit_url,
+ }
+ context.update(kwargs)
+ stream = template.stream(**context)
+ stream.enable_buffering(10)
+ return stream
+
+ def dependencies(self):
+ """Refresh the page when any of those pages was changed."""
+
+ dependencies = set()
+ for title in [self.wiki.logo_page, self.wiki.menu_page]:
+ if title not in self.storage:
+ dependencies.add(werkzeug.url_quote(title))
+ for title in [self.wiki.menu_page]:
+ if title in self.storage:
+ inode, size, mtime = self.storage.page_file_meta(title)
+ etag = '%s/%d-%d' % (werkzeug.url_quote(title), inode, mtime)
+ dependencies.add(etag)
+ return dependencies
+
+ def render_editor(self, preview=None):
+ _ = self.wiki.gettext
+ author = self.request.get_author()
+ if self.title in self.storage:
+ comment = _(u'changed')
+ (rev, old_date, old_author,
+ old_comment) = self.storage.page_meta(self.title)
+ if old_author == author:
+ comment = old_comment
+ else:
+ comment = _(u'uploaded')
+ rev = -1
+ return self.template('edit_file.html', comment=comment,
+ author=author, parent=rev)
+
+
+class WikiPageSpecial(WikiPage):
+ """Special pages, like recent changes, index, etc."""
+
+
+class WikiPageText(WikiPage):
+ """Pages of mime type text/* use this for display."""
+
+ def content_iter(self, lines):
+ yield '<pre>'
+ for line in lines:
+ yield werkzeug.html(line)
+ yield '</pre>'
+
+ def plain_text(self):
+ """
+ Get the content of the page with all markup removed, used for
+ indexing.
+ """
+
+ return self.storage.page_text(self.title)
+
+ def view_content(self, lines=None):
+ """
+ Read the page content from storage or preview and return iterator.
+ """
+
+ if lines is None:
+ f = self.storage.open_page(self.title)
+ lines = self.storage.page_lines(f)
+ return self.content_iter(lines)
+
+ def render_editor(self, preview=None):
+ """Generate the HTML for the editor."""
+
+ _ = self.wiki.gettext
+ author = self.request.get_author()
+ lines = []
+ try:
+ page_file = self.storage.open_page(self.title)
+ lines = self.storage.page_lines(page_file)
+ (rev, old_date, old_author,
+ old_comment) = self.storage.page_meta(self.title)
+ comment = _(u'modified')
+ if old_author == author:
+ comment = old_comment
+ except error.NotFoundErr:
+ comment = _(u'created')
+ rev = -1
+ except error.ForbiddenErr, e:
+ return werkzeug.html.p(werkzeug.html(_(unicode(e))))
+ if preview:
+ lines = preview
+ comment = self.request.form.get('comment', comment)
+ return self.template('edit_text.html', comment=comment,
+ preview=preview,
+ author=author, parent=rev, lines=lines)
+
+ def diff_content(self, from_text, to_text, message=u''):
+ """Generate the HTML markup for a diff."""
+
+ def infiniter(iterator):
+ """Turn an iterator into an infinite one, padding it with None"""
+
+ for i in iterator:
+ yield i
+ while True:
+ yield None
+
+ diff = difflib._mdiff(from_text.split('\n'), to_text.split('\n'))
+ mark_re = re.compile('\0[-+^]([^\1\0]*)\1|([^\0\1])')
+ yield message
+ yield u'<pre class="diff">'
+ for old_line, new_line, changed in diff:
+ old_no, old_text = old_line
+ new_no, new_text = new_line
+ line_no = (new_no or old_no or 1) - 1
+ if changed:
+ yield u'<div class="change" id="line_%d">' % line_no
+ old_iter = infiniter(mark_re.finditer(old_text))
+ new_iter = infiniter(mark_re.finditer(new_text))
+ old = old_iter.next()
+ new = new_iter.next()
+ buff = u''
+ while old or new:
+ while old and old.group(1):
+ if buff:
+ yield werkzeug.escape(buff)
+ buff = u''
+ yield u'<del>%s</del>' % werkzeug.escape(old.group(1))
+ old = old_iter.next()
+ while new and new.group(1):
+ if buff:
+ yield werkzeug.escape(buff)
+ buff = u''
+ yield u'<ins>%s</ins>' % werkzeug.escape(new.group(1))
+ new = new_iter.next()
+ if new:
+ buff += new.group(2)
+ old = old_iter.next()
+ new = new_iter.next()
+ if buff:
+ yield werkzeug.escape(buff)
+ yield u'</div>'
+ else:
+ yield u'<div class="orig" id="line_%d">%s</div>' % (
+ line_no, werkzeug.escape(old_text))
+ yield u'</pre>'
+
+
+class WikiPageColorText(WikiPageText):
+ """Text pages, but displayed colorized with pygments"""
+
+ def view_content(self, lines=None):
+ """Generate HTML for the content."""
+
+ if lines is None:
+ text = self.storage.page_text(self.title)
+ else:
+ text = ''.join(lines)
+ return self.highlight(text, mime=self.mime)
+
+ def highlight(self, text, mime=None, syntax=None, line_no=0):
+ """Colorize the source code."""
+
+ if pygments is None:
+ yield werkzeug.html.pre(werkzeug.html(text))
+ return
+
+ formatter = pygments.formatters.HtmlFormatter()
+ formatter.line_no = line_no
+
+ def wrapper(source, unused_outfile):
+ """Wrap each line of formatted output."""
+
+ yield 0, '<div class="highlight"><pre>'
+ for lineno, line in source:
+ yield (lineno,
+ werkzeug.html.span(line, id_="line_%d" %
+ formatter.line_no))
+ formatter.line_no += 1
+ yield 0, '</pre></div>'
+
+ formatter.wrap = wrapper
+ try:
+ if mime:
+ lexer = pygments.lexers.get_lexer_for_mimetype(mime)
+ elif syntax:
+ lexer = pygments.lexers.get_lexer_by_name(syntax)
+ else:
+ lexer = pygments.lexers.guess_lexer(text)
+        except pygments.util.ClassNotFound:
+ yield werkzeug.html.pre(werkzeug.html(text))
+ return
+ html = pygments.highlight(text, lexer, formatter)
+ yield html
+
+
+class WikiPageWiki(WikiPageColorText):
+ """Pages of with wiki markup use this for display."""
+
+ def __init__(self, *args, **kw):
+ super(WikiPageWiki, self).__init__(*args, **kw)
+ if self.config.get_bool('wiki_words', False):
+ self.parser = parser.WikiWikiParser
+ else:
+ self.parser = parser.WikiParser
+ if self.config.get_bool('ignore_indent', False):
+ try:
+ del self.parser.block['indent']
+ except KeyError:
+ pass
+
+ def extract_links(self, text=None):
+ """Extract all links from the page."""
+
+ if text is None:
+ try:
+ text = self.storage.page_text(self.title)
+ except error.NotFoundErr:
+ text = u''
+ return self.parser.extract_links(text)
+
+ def view_content(self, lines=None):
+ if lines is None:
+ f = self.storage.open_page(self.title)
+ lines = self.storage.page_lines(f)
+ if self.wiki.icon_page and self.wiki.icon_page in self.storage:
+ icons = self.index.page_links_and_labels(self.wiki.icon_page)
+ smilies = dict((emo, link) for (link, emo) in icons)
+ else:
+ smilies = None
+ content = self.parser(lines, self.wiki_link, self.wiki_image,
+ self.highlight, self.wiki_math, smilies)
+ return content
+
+ def wiki_math(self, math):
+ math_url = self.config.get('math_url',
+ 'http://www.mathtran.org/cgi-bin/mathtran?tex=')
+ if '%s' in math_url:
+ url = math_url % werkzeug.url_quote(math)
+ else:
+ url = '%s%s' % (math_url, werkzeug.url_quote(math))
+ label = werkzeug.escape(math, quote=True)
+ return werkzeug.html.img(src=url, alt=label, class_="math")
+
+ def dependencies(self):
+ dependencies = WikiPage.dependencies(self)
+ for title in [self.wiki.icon_page, self.wiki.alias_page]:
+ if title in self.storage:
+ inode, size, mtime = self.storage.page_file_meta(title)
+ etag = '%s/%d-%d' % (werkzeug.url_quote(title), inode, mtime)
+ dependencies.add(etag)
+ for link in self.index.page_links(self.title):
+ if link not in self.storage:
+ dependencies.add(werkzeug.url_quote(link))
+ return dependencies
+
+
+class WikiPageFile(WikiPage):
+ """Pages of all other mime types use this for display."""
+
+ def view_content(self, lines=None):
+ if self.title not in self.storage:
+ raise error.NotFoundErr()
+ content = ['<p>Download <a href="%s">%s</a> as <i>%s</i>.</p>' %
+ (self.request.get_download_url(self.title),
+ werkzeug.escape(self.title), self.mime)]
+ return content
+
+
+class WikiPageImage(WikiPageFile):
+ """Pages of mime type image/* use this for display."""
+
+ render_file = '128x128.png'
+
+ def view_content(self, lines=None):
+ if self.title not in self.storage:
+ raise error.NotFoundErr()
+ content = ['<img src="%s" alt="%s">'
+ % (self.request.get_url(self.title, self.wiki.render),
+ werkzeug.escape(self.title))]
+ return content
+
+ def render_mime(self):
+ """Give the filename and mime type of the rendered thumbnail."""
+
+ if not Image:
+ raise NotImplementedError('No Image library available')
+ return self.render_file, 'image/png'
+
+ def render_cache(self, cache_dir):
+ """Render the thumbnail and save in the cache."""
+
+ if not Image:
+ raise NotImplementedError('No Image library available')
+ page_file = self.storage.open_page(self.title)
+ cache_path = os.path.join(cache_dir, self.render_file)
+ cache_file = open(cache_path, 'wb')
+ try:
+ im = Image.open(page_file)
+ im = im.convert('RGBA')
+ im.thumbnail((128, 128), Image.ANTIALIAS)
+ im.save(cache_file, 'PNG')
+ except IOError:
+ raise error.UnsupportedMediaTypeErr('Image corrupted')
+ cache_file.close()
+ return cache_path
+
+
+class WikiPageCSV(WikiPageFile):
+ """Display class for type text/csv."""
+
+ def content_iter(self, lines=None):
+ import csv
+ _ = self.wiki.gettext
+ # XXX Add preview support
+ csv_file = self.storage.open_page(self.title)
+ reader = csv.reader(csv_file)
+ html_title = werkzeug.escape(self.title, quote=True)
+ yield u'<table id="%s" class="csvfile">' % html_title
+ try:
+ for row in reader:
+ yield u'<tr>%s</tr>' % (u''.join(u'<td>%s</td>' % cell
+ for cell in row))
+ except csv.Error, e:
+ yield u'</table>'
+ yield werkzeug.html.p(werkzeug.html(
+                _(u'Error parsing CSV file %(file)s on '
+                u'line %(line)d: %(error)s') %
+ {'file': html_title, 'line': reader.line_num, 'error': e}))
+ finally:
+ csv_file.close()
+ yield u'</table>'
+
+ def view_content(self, lines=None):
+ if self.title not in self.storage:
+ raise error.NotFoundErr()
+ return self.content_iter(lines)
+
+
+class WikiPageRST(WikiPageText):
+ """
+ Display ReStructured Text.
+ """
+
+ def content_iter(self, lines):
+ try:
+ from docutils.core import publish_parts
+ except ImportError:
+ return super(WikiPageRST, self).content_iter(lines)
+ text = ''.join(lines)
+ SAFE_DOCUTILS = dict(file_insertion_enabled=False, raw_enabled=False)
+ content = publish_parts(text, writer_name='html',
+ settings_overrides=SAFE_DOCUTILS)['html_body']
+ return [content]
+
+
+class WikiPageBugs(WikiPageText):
+ """
+    Display class for type text/x-bugs.
+    Parse the ISSUES file in (roughly) the format used by ciss.
+ """
+
+ def content_iter(self, lines):
+ last_lines = []
+ in_header = False
+ in_bug = False
+ attributes = {}
+ title = None
+ for line_no, line in enumerate(lines):
+ if last_lines and line.startswith('----'):
+ title = ''.join(last_lines)
+ last_lines = []
+ in_header = True
+ attributes = {}
+ elif in_header and ':' in line:
+ attribute, value = line.split(':', 1)
+ attributes[attribute.strip()] = value.strip()
+ else:
+ if in_header:
+ if in_bug:
+ yield '</div>'
+ #tags = [tag.strip() for tag in
+ # attributes.get('tags', '').split()
+ # if tag.strip()]
+ yield '<div id="line_%d">' % (line_no)
+ in_bug = True
+ if title:
+ yield werkzeug.html.h2(werkzeug.html(title))
+ if attributes:
+ yield '<dl>'
+ for attribute, value in attributes.iteritems():
+ yield werkzeug.html.dt(werkzeug.html(attribute))
+ yield werkzeug.html.dd(werkzeug.html(value))
+ yield '</dl>'
+ in_header = False
+ if not line.strip():
+ if last_lines:
+ if last_lines[0][0] in ' \t':
+ yield werkzeug.html.pre(werkzeug.html(
+ ''.join(last_lines)))
+ else:
+ yield werkzeug.html.p(werkzeug.html(
+ ''.join(last_lines)))
+ last_lines = []
+ else:
+ last_lines.append(line)
+ if last_lines:
+ if last_lines[0][0] in ' \t':
+ yield werkzeug.html.pre(werkzeug.html(
+ ''.join(last_lines)))
+ else:
+ yield werkzeug.html.p(werkzeug.html(
+ ''.join(last_lines)))
+ if in_bug:
+ yield '</div>'
+
+filename_map = {
+ 'README': (WikiPageText, 'text/plain'),
+ 'ISSUES': (WikiPageBugs, 'text/x-bugs'),
+ 'ISSUES.txt': (WikiPageBugs, 'text/x-bugs'),
+ 'COPYING': (WikiPageText, 'text/plain'),
+ 'CHANGES': (WikiPageText, 'text/plain'),
+ 'MANIFEST': (WikiPageText, 'text/plain'),
+ 'favicon.ico': (WikiPageImage, 'image/x-icon'),
+}
+
+mime_map = {
+ 'text': WikiPageColorText,
+ 'application/x-javascript': WikiPageColorText,
+ 'application/x-python': WikiPageColorText,
+ 'text/csv': WikiPageCSV,
+ 'text/x-rst': WikiPageRST,
+ 'text/x-wiki': WikiPageWiki,
+ 'image': WikiPageImage,
+ '': WikiPageFile,
+}
+
+mimetypes.add_type('application/x-python', '.wsgi')
+mimetypes.add_type('application/x-javascript', '.js')
+mimetypes.add_type('text/x-rst', '.rst')
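
filename_map and mime_map drive the choice of display class: exact file names first, then the full mime type, then its major type, then the catch-all empty key. The dispatch itself lives in wiki.py, which is not shown in full here, so this is only a plausible sketch of the lookup, run from inside the hatta directory:

    import page

    def guess_page_class(title):
        # a minimal sketch; the real dispatch in wiki.py may differ
        if title in page.filename_map:
            return page.filename_map[title]
        mime = page.page_mime(title)
        major = mime.split('/', 1)[0]
        cls = page.mime_map.get(mime,
                page.mime_map.get(major, page.mime_map['']))
        return cls, mime

    print guess_page_class(u'README')     # -> (WikiPageText, 'text/plain')
    print guess_page_class(u'photo.png')  # -> (WikiPageImage, 'image/png')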
diff --git a/websdk/hatta/parser.py b/websdk/hatta/parser.py
new file mode 100644
index 0000000..a76fa78
--- /dev/null
+++ b/websdk/hatta/parser.py
@@ -0,0 +1,529 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import re
+import sys
+import unicodedata
+import itertools
+import werkzeug
+
+
+EXTERNAL_URL_RE = re.compile(ur'^[a-z]+://|^mailto:', re.I | re.U)
+
+
+def external_link(addr):
+ """
+ Decide whether a link is absolute or internal.
+
+ >>> external_link('http://example.com')
+ True
+ >>> external_link('https://example.com')
+ True
+ >>> external_link('ftp://example.com')
+ True
+ >>> external_link('mailto:user@example.com')
+ True
+ >>> external_link('PageTitle')
+ False
+ >>> external_link(u'ąęśćUnicodePage')
+ False
+
+ """
+
+    return bool(EXTERNAL_URL_RE.match(addr))
+
+
+class WikiParser(object):
+ r"""
+ Responsible for generating HTML markup from the wiki markup.
+
+ The parser works on two levels. On the block level, it analyzes lines
+ of text and decides what kind of block element they belong to (block
+ elements include paragraphs, lists, headings, preformatted blocks).
+ Lines belonging to the same block are joined together, and a second
+ pass is made using regular expressions to parse line-level elements,
+ such as links, bold and italic text and smileys.
+
+ Some block-level elements, such as preformatted blocks, consume additional
+ lines from the input until they encounter the end-of-block marker, using
+ lines_until. Most block-level elements are just runs of marked up lines
+ though.
+
+
+ """
+
+ list_pat = ur"^\s*[*#]+\s+"
+ heading_pat = ur"^\s*=+"
+ quote_pat = ur"^[>]+\s+"
+ block = {
+ # "name": (priority, ur"pattern"),
+ "list": (10, list_pat),
+ "code": (20, ur"^[{][{][{]+\s*$"),
+ "conflict": (30, ur"^<<<<<<< local\s*$"),
+ "empty": (40, ur"^\s*$"),
+ "heading": (50, heading_pat),
+ "indent": (60, ur"^[ \t]+"),
+ "macro": (70, ur"^<<\w+\s*$"),
+ "quote": (80, quote_pat),
+ "rule": (90, ur"^\s*---+\s*$"),
+ "syntax": (100, ur"^\{\{\{\#![\w+#.-]+\s*$"),
+ "table": (110, ur"^\|"),
+ }
+ image_pat = (ur"\{\{(?P<image_target>([^|}]|}[^|}])*)"
+ ur"(\|(?P<image_text>([^}]|}[^}])*))?}}")
+ smilies = {
+ r':)': "smile.png",
+ r':(': "frown.png",
+ r':P': "tongue.png",
+ r':D': "grin.png",
+ r';)': "wink.png",
+ }
+ punct = {
+ r'...': "&hellip;",
+ r'--': "&ndash;",
+ r'---': "&mdash;",
+ r'~': "&nbsp;",
+ r'\~': "~",
+ r'~~': "&sim;",
+ r'(C)': "&copy;",
+ r'-->': "&rarr;",
+ r'<--': "&larr;",
+ r'(R)': "&reg;",
+ r'(TM)': "&trade;",
+ r'%%': "&permil;",
+ r'``': "&ldquo;",
+ r"''": "&rdquo;",
+ r",,": "&bdquo;",
+ }
+ markup = {
+ # "name": (priority, ur"pattern"),
+ "bold": (10, ur"[*][*]"),
+ "code": (20, ur"[{][{][{](?P<code_text>([^}]|[^}][}]|[^}][}][}])"
+ ur"*[}]*)[}][}][}]"),
+ "free_link": (30, ur"""[a-zA-Z]+://\S+[^\s.,:;!?()'"\*/=+<>-]"""),
+ "italic": (40, ur"//"),
+ "link": (50, ur"\[\[(?P<link_target>([^|\]]|\][^|\]])+)"
+ ur"(\|(?P<link_text>([^\]]|\][^\]])+))?\]\]"),
+ "image": (60, image_pat),
+ "linebreak": (70, ur"\\\\"),
+ "macro": (80, ur"[<][<](?P<macro_name>\w+)\s+"
+ ur"(?P<macro_text>([^>]|[^>][>])+)[>][>]"),
+ "mail": (90, ur"""(mailto:)?\S+@\S+(\.[^\s.,:;!?()'"\*/=+<>-]+)+"""),
+ "math": (100, ur"\$\$(?P<math_text>[^$]+)\$\$"),
+ "mono": (110, ur"##"),
+ "newline": (120, ur"\n"),
+ "punct": (130,
+ ur'(^|\b|(?<=\s))(%s)((?=[\s.,:;!?)/&=+"\'—-])|\b|$)' %
+ ur"|".join(re.escape(k) for k in punct)),
+ "table": (140, ur"=?\|=?"),
+ "text": (150, ur".+?"),
+ }
+
+ def __init__(self, lines, wiki_link, wiki_image,
+ wiki_syntax=None, wiki_math=None, smilies=None):
+ self.wiki_link = wiki_link
+ self.wiki_image = wiki_image
+ self.wiki_syntax = wiki_syntax
+ self.wiki_math = wiki_math
+ self.enumerated_lines = enumerate(lines)
+ if smilies is not None:
+ self.smilies = smilies
+ self.compile_patterns()
+ self.headings = {}
+ self.stack = []
+ self.line_no = 0
+
+ def compile_patterns(self):
+ self.quote_re = re.compile(self.quote_pat, re.U)
+ self.heading_re = re.compile(self.heading_pat, re.U)
+ self.list_re = re.compile(self.list_pat, re.U)
+ patterns = ((k, p) for (k, (x, p)) in
+ sorted(self.block.iteritems(), key=lambda x: x[1][0]))
+ self.block_re = re.compile(ur"|".join("(?P<%s>%s)" % pat
+ for pat in patterns), re.U)
+ self.code_close_re = re.compile(ur"^\}\}\}\s*$", re.U)
+ self.macro_close_re = re.compile(ur"^>>\s*$", re.U)
+ self.conflict_close_re = re.compile(ur"^>>>>>>> other\s*$", re.U)
+ self.conflict_sep_re = re.compile(ur"^=======\s*$", re.U)
+ self.image_re = re.compile(self.image_pat, re.U)
+ smileys = ur"|".join(re.escape(k) for k in self.smilies)
+ smiley_pat = (ur"(^|\b|(?<=\s))(?P<smiley_face>%s)"
+ ur"((?=[\s.,:;!?)/&=+-])|$)" % smileys)
+ self.markup['smiley'] = (125, smiley_pat)
+ patterns = ((k, p) for (k, (x, p)) in
+ sorted(self.markup.iteritems(), key=lambda x: x[1][0]))
+ self.markup_re = re.compile(ur"|".join("(?P<%s>%s)" % pat
+ for pat in patterns), re.U)
+
+ def __iter__(self):
+ return self.parse()
+
+ @classmethod
+ def extract_links(cls, text):
+ links = []
+
+ def link(addr, label=None, class_=None, image=None, alt=None,
+ lineno=0):
+ addr = addr.strip()
+ if external_link(addr):
+ # Don't index external links
+ return u''
+ if '#' in addr:
+ addr, chunk = addr.split('#', 1)
+ if addr == u'':
+ return u''
+ links.append((addr, label))
+ return u''
+ lines = text.split('\n')
+ for part in cls(lines, link, link):
+ for ret in links:
+ yield ret
+ links[:] = []
+
+ def parse(self):
+ """Parse a list of lines of wiki markup, yielding HTML for it."""
+
+ self.headings = {}
+ self.stack = []
+ self.line_no = 0
+
+ def key(enumerated_line):
+ line_no, line = enumerated_line
+ match = self.block_re.match(line)
+ if match:
+ return match.lastgroup
+ return "paragraph"
+
+ for kind, block in itertools.groupby(self.enumerated_lines, key):
+ func = getattr(self, "_block_%s" % kind)
+ for part in func(block):
+ yield part
+
+ def parse_line(self, line):
+ """
+ Find all the line-level markup and return HTML for it.
+
+ """
+
+ for match in self.markup_re.finditer(line):
+ func = getattr(self, "_line_%s" % match.lastgroup)
+ yield func(match.groupdict())
+
+ def pop_to(self, stop):
+ """
+        Pop from the stack until the specified tag is encountered.
+        Return a string containing the closing tags of everything popped.
+ """
+ tags = []
+ tag = None
+ try:
+ while tag != stop:
+ tag = self.stack.pop()
+ tags.append(tag)
+ except IndexError:
+ pass
+ return u"".join(u"</%s>" % tag for tag in tags)
+
+ def lines_until(self, close_re):
+ """Get lines from input until the closing markup is encountered."""
+
+ self.line_no, line = self.enumerated_lines.next()
+ while not close_re.match(line):
+ yield line.rstrip()
+ line_no, line = self.enumerated_lines.next()
+
+# methods for the markup inside lines:
+
+ def _line_table(self, groups):
+ return groups["table"]
+
+ def _line_linebreak(self, groups):
+ return u'<br>'
+
+ def _line_smiley(self, groups):
+ smiley = groups["smiley_face"]
+ try:
+ url = self.smilies[smiley]
+ except KeyError:
+ url = ''
+ return self.wiki_image(url, smiley, class_="smiley")
+
+ def _line_bold(self, groups):
+ if 'b' in self.stack:
+ return self.pop_to('b')
+ else:
+ self.stack.append('b')
+ return u"<b>"
+
+ def _line_italic(self, groups):
+ if 'i' in self.stack:
+ return self.pop_to('i')
+ else:
+ self.stack.append('i')
+ return u"<i>"
+
+ def _line_mono(self, groups):
+ if 'tt' in self.stack:
+ return self.pop_to('tt')
+ else:
+ self.stack.append('tt')
+ return u"<tt>"
+
+ def _line_punct(self, groups):
+ text = groups["punct"]
+ return self.punct.get(text, text)
+
+ def _line_newline(self, groups):
+ return "\n"
+
+ def _line_text(self, groups):
+ return werkzeug.escape(groups["text"])
+
+ def _line_math(self, groups):
+ if self.wiki_math:
+ return self.wiki_math(groups["math_text"])
+ else:
+ return "<var>%s</var>" % werkzeug.escape(groups["math_text"])
+
+ def _line_code(self, groups):
+ return u'<code>%s</code>' % werkzeug.escape(groups["code_text"])
+
+ def _line_free_link(self, groups):
+ groups['link_target'] = groups['free_link']
+ return self._line_link(groups)
+
+ def _line_mail(self, groups):
+ addr = groups['mail']
+ groups['link_text'] = addr
+ if not addr.startswith(u'mailto:'):
+ addr = u'mailto:%s' % addr
+ groups['link_target'] = addr
+ return self._line_link(groups)
+
+ def _line_link(self, groups):
+ target = groups['link_target']
+ text = groups.get('link_text')
+ if not text:
+ text = target
+ if '#' in text:
+ text, chunk = text.split('#', 1)
+ match = self.image_re.match(text)
+ if match:
+ image = self._line_image(match.groupdict())
+ return self.wiki_link(target, text, image=image)
+ return self.wiki_link(target, text)
+
+ def _line_image(self, groups):
+ target = groups['image_target']
+ alt = groups.get('image_text')
+ if alt is None:
+ alt = target
+ return self.wiki_image(target, alt)
+
+ def _line_macro(self, groups):
+ name = groups['macro_name']
+ text = groups['macro_text'].strip()
+ return u'<span class="%s">%s</span>' % (
+ werkzeug.escape(name, quote=True),
+ werkzeug.escape(text))
+
+# methods for the block (multiline) markup:
+
+ def _block_code(self, block):
+ for self.line_no, part in block:
+ inside = u"\n".join(self.lines_until(self.code_close_re))
+ yield werkzeug.html.pre(werkzeug.html(inside), class_="code",
+ id="line_%d" % self.line_no)
+
+ def _block_syntax(self, block):
+ for self.line_no, part in block:
+ syntax = part.lstrip('{#!').strip()
+ inside = u"\n".join(self.lines_until(self.code_close_re))
+ if self.wiki_syntax:
+ return self.wiki_syntax(inside, syntax=syntax,
+ line_no=self.line_no)
+ else:
+ return [werkzeug.html.div(werkzeug.html.pre(
+ werkzeug.html(inside), id="line_%d" % self.line_no),
+ class_="highlight")]
+
+ def _block_macro(self, block):
+ for self.line_no, part in block:
+ name = part.lstrip('<').strip()
+ inside = u"\n".join(self.lines_until(self.macro_close_re))
+ yield u'<div class="%s">%s</div>' % (
+ werkzeug.escape(name, quote=True),
+ werkzeug.escape(inside))
+
+ def _block_paragraph(self, block):
+ parts = []
+ first_line = None
+ for self.line_no, part in block:
+ if first_line is None:
+ first_line = self.line_no
+ parts.append(part)
+ text = u"".join(self.parse_line(u"".join(parts)))
+ yield werkzeug.html.p(text, self.pop_to(""), id="line_%d" % first_line)
+
+ def _block_indent(self, block):
+ parts = []
+ first_line = None
+ for self.line_no, part in block:
+ if first_line is None:
+ first_line = self.line_no
+ parts.append(part.rstrip())
+ text = u"\n".join(parts)
+ yield werkzeug.html.pre(werkzeug.html(text), id="line_%d" % first_line)
+
+ def _block_table(self, block):
+ first_line = None
+ in_head = False
+ for self.line_no, line in block:
+ if first_line is None:
+ first_line = self.line_no
+ yield u'<table id="line_%d">' % first_line
+ table_row = line.strip()
+ is_header = table_row.startswith('|=') and table_row.endswith('=|')
+ if not in_head and is_header:
+ in_head = True
+ yield '<thead>'
+ elif in_head and not is_header:
+ in_head = False
+ yield '</thead>'
+ yield '<tr>'
+ in_cell = False
+ in_th = False
+
+ for part in self.parse_line(table_row):
+ if part in ('=|', '|', '=|=', '|='):
+ if in_cell:
+ if in_th:
+ yield '</th>'
+ else:
+ yield '</td>'
+ in_cell = False
+ if part in ('=|=', '|='):
+ in_th = True
+ else:
+ in_th = False
+ else:
+ if not in_cell:
+ if in_th:
+ yield '<th>'
+ else:
+ yield '<td>'
+ in_cell = True
+ yield part
+ if in_cell:
+ if in_th:
+ yield '</th>'
+ else:
+ yield '</td>'
+ yield '</tr>'
+ yield u'</table>'
+
+ def _block_empty(self, block):
+ yield u''
+
+ def _block_rule(self, block):
+ for self.line_no, line in block:
+ yield werkzeug.html.hr()
+
+ def _block_heading(self, block):
+ for self.line_no, line in block:
+ level = min(len(self.heading_re.match(line).group(0).strip()), 5)
+ self.headings[level - 1] = self.headings.get(level - 1, 0) + 1
+ label = u"-".join(str(self.headings.get(i, 0))
+ for i in range(level))
+ yield werkzeug.html.a(name="head-%s" % label)
+ yield u'<h%d id="line_%d">%s</h%d>' % (level, self.line_no,
+ werkzeug.escape(line.strip("= \t\n\r\v")), level)
+
+ def _block_list(self, block):
+ level = 0
+ in_ul = False
+ kind = None
+ for self.line_no, line in block:
+ bullets = self.list_re.match(line).group(0).strip()
+ nest = len(bullets)
+ if kind is None:
+ if bullets.startswith('*'):
+ kind = 'ul'
+ else:
+ kind = 'ol'
+ while nest > level:
+ if in_ul:
+ yield '<li>'
+ yield '<%s id="line_%d">' % (kind, self.line_no)
+ in_ul = True
+ level += 1
+ while nest < level:
+ yield '</li></%s>' % kind
+ in_ul = False
+ level -= 1
+ if nest == level and not in_ul:
+ yield '</li>'
+ content = line.lstrip().lstrip('*#').strip()
+ yield '<li>%s%s' % (u"".join(self.parse_line(content)),
+ self.pop_to(""))
+ in_ul = False
+ yield ('</li></%s>' % kind) * level
+
+ def _block_quote(self, block):
+ level = 0
+ in_p = False
+ for self.line_no, line in block:
+ nest = len(self.quote_re.match(line).group(0).strip())
+ if nest == level:
+ yield u'\n'
+ while nest > level:
+ if in_p:
+ yield '%s</p>' % self.pop_to("")
+ in_p = False
+ yield '<blockquote>'
+ level += 1
+ while nest < level:
+ if in_p:
+ yield '%s</p>' % self.pop_to("")
+ in_p = False
+ yield '</blockquote>'
+ level -= 1
+ content = line.lstrip().lstrip('>').strip()
+ if not in_p:
+ yield '<p id="line_%d">' % self.line_no
+ in_p = True
+ yield u"".join(self.parse_line(content))
+ if in_p:
+ yield '%s</p>' % self.pop_to("")
+ yield '</blockquote>' * level
+
+ def _block_conflict(self, block):
+ for self.line_no, part in block:
+ yield u'<div class="conflict">'
+ local = u"\n".join(self.lines_until(self.conflict_sep_re))
+ yield werkzeug.html.pre(werkzeug.html(local),
+ class_="local",
+ id="line_%d" % self.line_no)
+ other = u"\n".join(self.lines_until(self.conflict_close_re))
+ yield werkzeug.html.pre(werkzeug.html(other),
+ class_="other",
+ id="line_%d" % self.line_no)
+ yield u'</div>'
+
+
+class WikiWikiParser(WikiParser):
+ """A version of WikiParser that recognizes WikiWord links."""
+
+ markup = dict(WikiParser.markup)
+ camel_link = ur"\w+[%s]\w+" % re.escape(
+ u''.join(unichr(i) for i in xrange(sys.maxunicode)
+ if unicodedata.category(unichr(i)) == 'Lu'))
+ markup["camel_link"] = (105, camel_link)
+ markup["camel_nolink"] = (106, ur"[!~](?P<camel_text>%s)" % camel_link)
+
+ def _line_camel_link(self, groups):
+ groups['link_target'] = groups['camel_link']
+ return self._line_link(groups)
+
+ def _line_camel_nolink(self, groups):
+ return werkzeug.escape(groups["camel_text"])
diff --git a/websdk/hatta/search.py b/websdk/hatta/search.py
new file mode 100644
index 0000000..2d8ae69
--- /dev/null
+++ b/websdk/hatta/search.py
@@ -0,0 +1,317 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import sqlite3
+import re
+import os
+import thread
+
+import error
+
+
+class WikiSearch(object):
+ """
+ Responsible for indexing words and links, for fast searching and
+ backlinks. Uses a cache directory to store the index files.
+ """
+
+ word_pattern = re.compile(ur"""\w[-~&\w]+\w""", re.UNICODE)
+ jword_pattern = re.compile(
+ur"""[ヲ-゚]+|[ぁ-ん~ー]+|[ァ-ヶ~ー]+|[0-9A-Za-z]+|"""
+ur"""[0-9A-Za-zΑ-Ωα-ωА-я]+|"""
+ur"""[^- !"#$%&'()*+,./:;<=>?@\[\\\]^_`{|}"""
+ur"""‾。「」、・ 、。,.・:;?!゛゜´`¨"""
+ur"""^ ̄_/〜‖|…‥‘’“”"""
+ur"""()〔〕[]{}〈〉《》「」『』【】+−±×÷"""
+ur"""=≠<>≦≧∞∴♂♀°′″℃¥$¢£"""
+ur"""%#&*@§☆★○●◎◇◆□■△▲▽▼※〒"""
+ur"""→←↑↓〓∈∋⊆⊇⊂⊃∪∩∧∨¬⇒⇔∠∃∠⊥"""
+ur"""⌒∂∇≡≒≪≫√∽∝∵∫∬ʼn♯♭♪†‡¶◾"""
+ur"""─│┌┐┘└├┬┤┴┼"""
+ur"""━┃┏┓┛┗┣┫┻╋"""
+ur"""┠┯┨┷┿┝┰┥┸╂"""
+ur"""ヲ-゚ぁ-ん~ーァ-ヶ"""
+ur"""0-9A-Za-z0-9A-Za-zΑ-Ωα-ωА-я]+""", re.UNICODE)
+
+ def __init__(self, cache_path, lang, storage):
+ self._con = {}
+ self.path = cache_path
+ self.storage = storage
+ self.lang = lang
+ if lang == "ja":
+ self.split_text = self.split_japanese_text
+ self.filename = os.path.join(cache_path, 'index.sqlite3')
+ if not os.path.isdir(self.path):
+ self.empty = True
+ os.makedirs(self.path)
+ elif not os.path.exists(self.filename):
+ self.empty = True
+ else:
+ self.empty = False
+ self.init_db(self.con)
+
+ def init_db(self, con):
+ con.execute('CREATE TABLE IF NOT EXISTS titles '
+ '(id INTEGER PRIMARY KEY, title VARCHAR);')
+ con.execute('CREATE TABLE IF NOT EXISTS words '
+ '(word VARCHAR, page INTEGER, count INTEGER);')
+ con.execute('CREATE INDEX IF NOT EXISTS index1 '
+ 'ON words (page);')
+ con.execute('CREATE INDEX IF NOT EXISTS index2 '
+ 'ON words (word);')
+ con.execute('CREATE TABLE IF NOT EXISTS links '
+ '(src INTEGER, target INTEGER, label VARCHAR, number INTEGER);')
+ con.commit()
+
+ @property
+ def con(self):
+ """Keep one connection per thread."""
+
+ thread_id = thread.get_ident()
+ try:
+ return self._con[thread_id]
+ except KeyError:
+ connection = sqlite3.connect(self.filename)
+ self._con[thread_id] = connection
+ return connection
+
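+    # A minimal usage sketch (hypothetical worker function): connections
+    # are keyed by thread.get_ident(), so two threads touching the same
+    # index each get a private sqlite3 connection:
+    #
+    #     def worker(index):
+    #         con = index.con  # created lazily for the calling thread
+    #         con.execute('SELECT COUNT(*) FROM titles;')
+    #
+    # This matters because sqlite3 connections must not be shared between
+    # threads.
+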
+ def split_text(self, text):
+ """Splits text into words"""
+
+ for match in self.word_pattern.finditer(text):
+ word = match.group(0)
+ yield word.lower()
+
+ def split_japanese_text(self, text):
+ """Splits text into words, including rules for Japanese"""
+
+ for match in self.word_pattern.finditer(text):
+ word = match.group(0)
+ got_japanese = False
+ for m in self.jword_pattern.finditer(word):
+ w = m.group(0)
+ got_japanese = True
+ yield w.lower()
+ if not got_japanese:
+ yield word.lower()
+
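+    # Behaviour sketch (illustrative input): for mixed text such as
+    # u"Tokyo\u6771\u4eac2011", split_japanese_text() yields the Latin,
+    # kanji and digit runs as separate lowercased tokens (roughly
+    # u"tokyo", u"\u6771\u4eac", u"2011"), whereas plain split_text()
+    # would keep the whole run as a single word.
+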
+ def count_words(self, words):
+ count = {}
+ for word in words:
+ count[word] = count.get(word, 0) + 1
+ return count
+
+ def title_id(self, title, con):
+ c = con.execute('SELECT id FROM titles WHERE title=?;', (title,))
+ idents = c.fetchone()
+ if idents is None:
+ con.execute('INSERT INTO titles (title) VALUES (?);', (title,))
+ c = con.execute('SELECT LAST_INSERT_ROWID();')
+ idents = c.fetchone()
+ return idents[0]
+
+ def update_words(self, title, text, cursor):
+ title_id = self.title_id(title, cursor)
+ cursor.execute('DELETE FROM words WHERE page=?;', (title_id,))
+ if not text:
+ return
+ words = self.count_words(self.split_text(text))
+ title_words = self.count_words(self.split_text(title))
+ for word, count in title_words.iteritems():
+ words[word] = words.get(word, 0) + count
+ for word, count in words.iteritems():
+ cursor.execute('INSERT INTO words VALUES (?, ?, ?);',
+ (word, title_id, count))
+
+ def update_links(self, title, links_and_labels, cursor):
+ title_id = self.title_id(title, cursor)
+ cursor.execute('DELETE FROM links WHERE src=?;', (title_id,))
+ for number, (link, label) in enumerate(links_and_labels):
+ cursor.execute('INSERT INTO links VALUES (?, ?, ?, ?);',
+ (title_id, link, label, number))
+
+ def orphaned_pages(self):
+ """Gives all pages with no links to them."""
+
+ con = self.con
+ try:
+ sql = ('SELECT title FROM titles '
+ 'WHERE NOT EXISTS '
+ '(SELECT * FROM links WHERE target=title) '
+ 'ORDER BY title;')
+ for (title,) in con.execute(sql):
+ yield unicode(title)
+ finally:
+ con.commit()
+
+ def wanted_pages(self):
+ """Gives all pages that are linked to, but don't exist, together with
+ the number of links."""
+
+ con = self.con
+ try:
+ sql = ('SELECT COUNT(*), target FROM links '
+ 'WHERE NOT EXISTS '
+ '(SELECT * FROM titles WHERE target=title) '
+ 'GROUP BY target ORDER BY -COUNT(*);')
+ for (refs, db_title,) in con.execute(sql):
+ title = unicode(db_title)
+ yield refs, title
+ finally:
+ con.commit()
+
+ def page_backlinks(self, title):
+ """Gives a list of pages linking to specified page."""
+
+        con = self.con
+ try:
+ sql = ('SELECT DISTINCT(titles.title) '
+ 'FROM links, titles '
+ 'WHERE links.target=? AND titles.id=links.src '
+ 'ORDER BY titles.title;')
+ for (backlink,) in con.execute(sql, (title,)):
+ yield unicode(backlink)
+ finally:
+ con.commit()
+
+ def page_links(self, title):
+ """Gives a list of links on specified page."""
+
+        con = self.con
+ try:
+ title_id = self.title_id(title, con)
+ sql = 'SELECT target FROM links WHERE src=? ORDER BY number;'
+ for (link,) in con.execute(sql, (title_id,)):
+ yield unicode(link)
+ finally:
+ con.commit()
+
+ def page_links_and_labels(self, title):
+        con = self.con
+ try:
+ title_id = self.title_id(title, con)
+ sql = ('SELECT target, label FROM links '
+ 'WHERE src=? ORDER BY number;')
+ for link, label in con.execute(sql, (title_id,)):
+ yield unicode(link), unicode(label)
+ finally:
+ con.commit()
+
+ def find(self, words):
+ """Iterator of all pages containing the words, and their scores."""
+
+ con = self.con
+ try:
+ ranks = []
+ for word in words:
+ # Calculate popularity of each word.
+ sql = 'SELECT SUM(words.count) FROM words WHERE word LIKE ?;'
+ rank = con.execute(sql, ('%%%s%%' % word,)).fetchone()[0]
+                # If any rank is 0, there will be no results anyway
+ if not rank:
+ return
+ ranks.append((rank, word))
+ ranks.sort()
+ # Start with the least popular word. Get all pages that contain it.
+ first_rank, first = ranks[0]
+ rest = ranks[1:]
+ sql = ('SELECT words.page, titles.title, SUM(words.count) '
+ 'FROM words, titles '
+ 'WHERE word LIKE ? AND titles.id=words.page '
+ 'GROUP BY words.page;')
+ first_counts = con.execute(sql, ('%%%s%%' % first,))
+            # Check for the rest of the words
+ for title_id, title, first_count in first_counts:
+ # Score for the first word
+ score = float(first_count) / first_rank
+ for rank, word in rest:
+ sql = ('SELECT SUM(count) FROM words '
+ 'WHERE page=? AND word LIKE ?;')
+ count = con.execute(sql,
+ (title_id, '%%%s%%' % word)).fetchone()[0]
+ if not count:
+ # If page misses any of the words, its score is 0
+ score = 0
+ break
+ score += float(count) / rank
+ if score > 0:
+ yield int(100 * score), unicode(title)
+ finally:
+ con.commit()
+
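+    # Usage sketch (hypothetical data): assuming pages "Home" and "News"
+    # both mention "wiki", something like
+    #
+    #     hits = sorted(index.find([u'wiki']), reverse=True)
+    #
+    # yields (score, title) pairs, best match first. A page that misses
+    # any of the requested words is dropped, since its score is zeroed.
+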
+ def reindex_page(self, page, title, cursor, text=None):
+ """Updates the content of the database, needs locks around."""
+
+ if text is None:
+ get_text = getattr(page, 'plain_text', lambda: u'')
+ try:
+ text = get_text()
+ except error.NotFoundErr:
+ text = None
+ title_id = self.title_id(title, cursor)
+ if not list(self.page_backlinks(title)):
+ cursor.execute("DELETE FROM titles WHERE id=?;",
+ (title_id,))
+ extract_links = getattr(page, 'extract_links', None)
+ if extract_links and text:
+ links = extract_links(text)
+ else:
+ links = []
+ self.update_links(title, links, cursor=cursor)
+ self.update_words(title, text or u'', cursor=cursor)
+
+ def update_page(self, page, title, data=None, text=None):
+ """Updates the index with new page content, for a single page."""
+
+ if text is None and data is not None:
+ text = unicode(data, self.storage.charset, 'replace')
+ cursor = self.con.cursor()
+ try:
+ self.set_last_revision(self.storage.repo_revision())
+ self.reindex_page(page, title, cursor, text)
+ self.con.commit()
+ except:
+ self.con.rollback()
+ raise
+
+ def reindex(self, wiki, pages):
+ """Updates specified pages in bulk."""
+
+ cursor = self.con.cursor()
+ try:
+ for title in pages:
+ page = wiki.get_page(None, title)
+ self.reindex_page(page, title, cursor)
+ self.con.commit()
+ self.empty = False
+ except:
+ self.con.rollback()
+ raise
+
+ def set_last_revision(self, rev):
+ """Store the last indexed repository revision."""
+
+        # We use % here because sqlite3's parameter substitution doesn't
+        # work in PRAGMA statements
+ # We store revision 0 as 1, 1 as 2, etc. because 0 means "no revision"
+ self.con.execute('PRAGMA USER_VERSION=%d;' % (int(rev + 1),))
+
+ def get_last_revision(self):
+ """Retrieve the last indexed repository revision."""
+
+ con = self.con
+ c = con.execute('PRAGMA USER_VERSION;')
+ rev = c.fetchone()[0]
+ # -1 means "no revision", 1 means revision 0, 2 means revision 1, etc.
+ return rev - 1
+
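+    # Round-trip sketch for the PRAGMA trick used by the two methods
+    # above (illustrative values):
+    #
+    #     index.set_last_revision(41)  # runs PRAGMA USER_VERSION=42
+    #     index.get_last_revision()    # reads 42 back, returns 41
+    #
+    # A fresh database reports USER_VERSION 0, so get_last_revision()
+    # returns -1, meaning nothing has been indexed yet.
+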
+ def update(self, wiki):
+ """Reindex al pages that changed since last indexing."""
+
+ last_rev = self.get_last_revision()
+ if last_rev == -1:
+ changed = self.storage.all_pages()
+ else:
+ changed = self.storage.changed_since(last_rev)
+ self.reindex(wiki, changed)
+ rev = self.storage.repo_revision()
+ self.set_last_revision(rev)
diff --git a/websdk/hatta/storage.py b/websdk/hatta/storage.py
new file mode 100644
index 0000000..f1f2ee8
--- /dev/null
+++ b/websdk/hatta/storage.py
@@ -0,0 +1,586 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+import tempfile
+import thread
+import re
+import werkzeug
+import datetime
+import errno
+
+# Note: we have to set these before importing Mercurial
+os.environ['HGENCODING'] = 'utf-8'
+os.environ['HGMERGE'] = "internal:merge"
+
+import mercurial.hg
+import mercurial.ui
+import mercurial.revlog
+import mercurial.util
+import mercurial.hgweb
+import mercurial.commands
+import mercurial.merge
+
+import error
+import page
+
+
+def locked_repo(func):
+ """A decorator for locking the repository when calling a method."""
+
+ def new_func(self, *args, **kwargs):
+ """Wrap the original function in locks."""
+
+ wlock = self.repo.wlock()
+ lock = self.repo.lock()
+ try:
+ func(self, *args, **kwargs)
+ finally:
+ lock.release()
+ wlock.release()
+
+ return new_func
+
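+# Usage sketch: a WikiStorage method decorated with @locked_repo runs with
+# both the working-directory lock and the store lock held, e.g.
+#
+#     @locked_repo
+#     def delete_page(self, title, ...):
+#         ...  # safe to mutate the repository here
+#
+# Note that the wrapper discards the wrapped method's return value, so it
+# only suits methods called for their side effects.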
+
+def _find_repo_path(path):
+ """Go up the directory tree looking for a repository."""
+
+ while not os.path.isdir(os.path.join(path, ".hg")):
+ old_path, path = path, os.path.dirname(path)
+ if path == old_path:
+ return None
+ return path
+
+
+def _get_ui():
+ try:
+ ui = mercurial.ui.ui(report_untrusted=False,
+ interactive=False, quiet=True)
+ except TypeError:
+ # Mercurial 1.3 changed the way we setup the ui object.
+ ui = mercurial.ui.ui()
+ ui.quiet = True
+ ui._report_untrusted = False
+ ui.setconfig('ui', 'interactive', False)
+ return ui
+
+
+class WikiStorage(object):
+ """
+    Provides means of storing wiki pages and keeping track of their
+    change history, using a Mercurial repository as the storage backend.
+ """
+
+ def __init__(self, path, charset=None, _=lambda x: x, unix_eol=False,
+ extension=None):
+ """
+ Takes the path to the directory where the pages are to be kept.
+ If the directory doesn't exist, it will be created. If it's inside
+ a Mercurial repository, that repository will be used, otherwise
+ a new repository will be created in it.
+ """
+
+ self._ = _
+ self.charset = charset or 'utf-8'
+ self.unix_eol = unix_eol
+ self.extension = extension
+ self.path = os.path.abspath(path)
+ if not os.path.exists(self.path):
+ os.makedirs(self.path)
+ self.ui = _get_ui()
+ self.repo_path = _find_repo_path(self.path)
+ if self.repo_path is None:
+ # Create the repository if needed.
+ self.repo_path = self.path
+ mercurial.hg.repository(self.ui, self.repo_path, create=True)
+ self.repo_prefix = self.path[len(self.repo_path):].strip('/')
+ self._repos = {}
+
+ def reopen(self):
+ """Close and reopen the repo, to make sure we are up to date."""
+
+ self._repos = {}
+
+ @property
+ def repo(self):
+ """Keep one open repository per thread."""
+
+ thread_id = thread.get_ident()
+ try:
+ return self._repos[thread_id]
+ except KeyError:
+ repo = mercurial.hg.repository(self.ui, self.repo_path)
+ self._repos[thread_id] = repo
+ return repo
+
+ def _check_path(self, path):
+ """
+ Ensure that the path is within allowed bounds.
+ """
+
+ _ = self._
+ abspath = os.path.abspath(path)
+ if os.path.islink(path) or os.path.isdir(path):
+ raise error.ForbiddenErr(
+ _(u"Can't use symbolic links or directories as pages"))
+ if not abspath.startswith(self.path):
+ raise error.ForbiddenErr(
+ _(u"Can't read or write outside of the pages repository"))
+
+ def _file_path(self, title):
+ return os.path.join(self.repo_path, self._title_to_file(title))
+
+ def _title_to_file(self, title):
+ title = unicode(title).strip()
+ filename = werkzeug.url_quote(title, safe='')
+        # Escape special Windows device filenames and dot files
+ _windows_device_files = ('CON', 'AUX', 'COM1', 'COM2', 'COM3',
+ 'COM4', 'LPT1', 'LPT2', 'LPT3', 'PRN',
+ 'NUL')
+ if (filename.split('.')[0].upper() in _windows_device_files or
+ filename.startswith('_') or filename.startswith('.')):
+ filename = '_' + filename
+ if page.page_mime(title) == 'text/x-wiki' and self.extension:
+ filename += self.extension
+ return os.path.join(self.repo_prefix, filename)
+
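+    # Escaping sketch (illustrative titles): u"COM1" and u".hidden" both
+    # gain a leading underscore ("_COM1", "_.hidden") so they can't clash
+    # with Windows device names or dot files; with extension='.wiki' a
+    # plain wiki title like u"Front Page" maps to "Front%20Page.wiki".
+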
+ def _file_to_title(self, filepath):
+ _ = self._
+ if not filepath.startswith(self.repo_prefix):
+ raise error.ForbiddenErr(
+ _(u"Can't read or write outside of the pages repository"))
+ name = filepath[len(self.repo_prefix):].strip('/')
+        # Un-escape special Windows device filenames and dot files
+ if name.startswith('_') and len(name) > 1:
+ name = name[1:]
+ if self.extension and name.endswith(self.extension):
+ name = name[:-len(self.extension)]
+ return werkzeug.url_unquote(name)
+
+ def __contains__(self, title):
+        if not title:
+            return False
+        file_path = self._file_path(title)
+        return os.path.isfile(file_path) and not os.path.islink(file_path)
+
+ def __iter__(self):
+ return self.all_pages()
+
+ def merge_changes(self, changectx, repo_file, text, user, parent):
+ """Commits and merges conflicting changes in the repository."""
+
+ _ = self._
+ tip_node = changectx.node()
+ filectx = changectx[repo_file].filectx(parent)
+ parent_node = filectx.changectx().node()
+
+ self.repo.dirstate.setparents(parent_node)
+ node = self._commit([repo_file], text, user)
+
+ partial = lambda filename: repo_file == filename
+ try:
+ mercurial.merge.update(self.repo, tip_node, True, True, partial)
+ msg = _(u'merge of edit conflict')
+ except mercurial.util.Abort:
+ msg = _(u'failed merge of edit conflict')
+ self.repo.dirstate.setparents(tip_node, node)
+            # Mercurial 1.1 and later need the merge state updated
+ try:
+ mergestate = mercurial.merge.mergestate
+ except AttributeError:
+ pass
+ else:
+ state = mergestate(self.repo)
+ try:
+ state.mark(repo_file, "r")
+ except KeyError:
+ # There were no conflicts to mark
+ pass
+ else:
+ # Mercurial 1.7+ needs a commit
+ try:
+ commit = state.commit
+ except AttributeError:
+ pass
+ else:
+ commit()
+ return msg
+
+ @locked_repo
+ def save_file(self, title, file_name, author=u'', comment=u'',
+ parent=None):
+ """Save an existing file as specified page."""
+
+ _ = self._
+ user = author.encode('utf-8') or _(u'anon').encode('utf-8')
+ text = comment.encode('utf-8') or _(u'comment').encode('utf-8')
+ repo_file = self._title_to_file(title)
+ file_path = self._file_path(title)
+ self._check_path(file_path)
+ try:
+ mercurial.util.rename(file_name, file_path)
+ except OSError, e:
+ if e.errno == errno.ENAMETOOLONG:
+ # "File name too long"
+ raise error.RequestURITooLarge()
+ else:
+ raise
+ changectx = self._changectx()
+ try:
+ # Mercurial 1.5 and earlier have .add() on the repo
+ add = self.repo.add
+ except AttributeError:
+ # Mercurial 1.6
+ add = self.repo[None].add
+ try:
+ filectx_tip = changectx[repo_file]
+ current_page_rev = filectx_tip.filerev()
+ except mercurial.revlog.LookupError:
+ add([repo_file])
+ current_page_rev = -1
+ if parent is not None and current_page_rev != parent:
+ msg = self.merge_changes(changectx, repo_file, text, user, parent)
+ user = '<wiki>'
+ text = msg.encode('utf-8')
+ self._commit([repo_file], text, user)
+
+ def _commit(self, files, text, user):
+ try:
+ return self.repo.commit(files=files, text=text, user=user,
+ force=True, empty_ok=True)
+ except TypeError:
+            # Mercurial 1.3 doesn't accept the empty_ok or files parameters.
+            # Import mercurial.match lazily, since the oldest Mercurial
+            # versions this module supports don't have it.
+            import mercurial.match
+            match = mercurial.match.exact(self.repo_path, '', list(files))
+ return self.repo.commit(match=match, text=text, user=user,
+ force=True)
+
+ def save_data(self, title, data, author=u'', comment=u'', parent=None):
+ """Save data as specified page."""
+
+ try:
+ temp_path = tempfile.mkdtemp(dir=self.path)
+ file_path = os.path.join(temp_path, 'saved')
+ f = open(file_path, "wb")
+ f.write(data)
+ f.close()
+ self.save_file(title, file_path, author, comment, parent)
+ finally:
+ try:
+ os.unlink(file_path)
+ except OSError:
+ pass
+ try:
+ os.rmdir(temp_path)
+ except OSError:
+ pass
+
+ def save_text(self, title, text, author=u'', comment=u'', parent=None):
+ """Save text as specified page, encoded to charset."""
+
+ data = text.encode(self.charset)
+ if self.unix_eol:
+ data = data.replace('\r\n', '\n')
+ self.save_data(title, data, author, comment, parent)
+
+ def page_text(self, title):
+ """Read unicode text of a page."""
+
+ data = self.open_page(title).read()
+ text = unicode(data, self.charset, 'replace')
+ return text
+
+ def page_lines(self, page):
+ for data in page.xreadlines():
+ yield unicode(data, self.charset, 'replace')
+
+ @locked_repo
+ def delete_page(self, title, author=u'', comment=u''):
+ user = author.encode('utf-8') or 'anon'
+ text = comment.encode('utf-8') or 'deleted'
+ repo_file = self._title_to_file(title)
+ file_path = self._file_path(title)
+ self._check_path(file_path)
+ try:
+ # Mercurial 1.5 and earlier have .remove() on the repo
+ remove = self.repo.remove
+ except AttributeError:
+ # Mercurial 1.6
+ remove = self.repo[None].remove
+ try:
+ os.unlink(file_path)
+ except OSError:
+ pass
+ remove([repo_file])
+ self._commit([repo_file], text, user)
+
+ def open_page(self, title):
+ """Open the page and return a file-like object with its contents."""
+
+ file_path = self._file_path(title)
+ self._check_path(file_path)
+ try:
+ return open(file_path, "rb")
+ except IOError:
+ raise error.NotFoundErr()
+
+ def page_file_meta(self, title):
+ """Get page's inode number, size and last modification time."""
+
+ try:
+ (st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size,
+ st_atime, st_mtime, st_ctime) = os.stat(self._file_path(title))
+ except OSError:
+ return 0, 0, 0
+ return st_ino, st_size, st_mtime
+
+ def page_meta(self, title):
+ """Get page's revision, date, last editor and his edit comment."""
+
+ filectx_tip = self._find_filectx(title)
+ if filectx_tip is None:
+ raise error.NotFoundErr()
+ rev = filectx_tip.filerev()
+ filectx = filectx_tip.filectx(rev)
+ date = datetime.datetime.fromtimestamp(filectx.date()[0])
+ author = unicode(filectx.user(), "utf-8",
+ 'replace').split('<')[0].strip()
+ comment = unicode(filectx.description(), "utf-8", 'replace')
+ return rev, date, author, comment
+
+ def repo_revision(self):
+ """Give the latest revision of the repository."""
+
+ return self._changectx().rev()
+
+ def _changectx(self):
+ """Get the changectx of the tip."""
+
+ try:
+ # This is for Mercurial 1.0
+ return self.repo.changectx()
+ except TypeError:
+ # Mercurial 1.3 (and possibly earlier) needs an argument
+ return self.repo.changectx('tip')
+
+ def _find_filectx(self, title):
+ """Find the last revision in which the file existed."""
+
+ repo_file = self._title_to_file(title)
+ stack = [self._changectx()]
+ while stack:
+ changectx = stack.pop()
+ if repo_file in changectx:
+ return changectx[repo_file]
+ if changectx.rev() == 0:
+ return None
+ for parent in changectx.parents():
+ if parent != changectx:
+ stack.append(parent)
+ return None
+
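+    # Sketch of why the walk matters: page_meta() and page_history() both
+    # call this, e.g.
+    #
+    #     filectx = storage._find_filectx(u'Home')
+    #     if filectx is not None:
+    #         last_rev = filectx.filerev()
+    #
+    # For a deleted page the file is absent from the tip, but the walk
+    # still finds the last ancestor changeset that contained it, keeping
+    # old revisions reachable.
+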
+ def page_history(self, title):
+ """Iterate over the page's history."""
+
+ filectx_tip = self._find_filectx(title)
+ if filectx_tip is None:
+ return
+ maxrev = filectx_tip.filerev()
+ minrev = 0
+ for rev in range(maxrev, minrev - 1, -1):
+ filectx = filectx_tip.filectx(rev)
+ date = datetime.datetime.fromtimestamp(filectx.date()[0])
+ author = unicode(filectx.user(), "utf-8",
+ 'replace').split('<')[0].strip()
+ comment = unicode(filectx.description(), "utf-8", 'replace')
+ yield rev, date, author, comment
+
+ def page_revision(self, title, rev):
+ """Get binary content of the specified revision of the page."""
+
+ filectx_tip = self._find_filectx(title)
+ if filectx_tip is None:
+ raise error.NotFoundErr()
+ try:
+ data = filectx_tip.filectx(rev).data()
+ except IndexError:
+ raise error.NotFoundErr()
+ return data
+
+ def revision_text(self, title, rev):
+ """Get unicode text of the specified revision of the page."""
+
+ data = self.page_revision(title, rev)
+ text = unicode(data, self.charset, 'replace')
+ return text
+
+ def history(self):
+ """Iterate over the history of entire wiki."""
+
+ changectx = self._changectx()
+ maxrev = changectx.rev()
+ minrev = 0
+ for wiki_rev in range(maxrev, minrev - 1, -1):
+ change = self.repo.changectx(wiki_rev)
+ date = datetime.datetime.fromtimestamp(change.date()[0])
+ author = unicode(change.user(), "utf-8",
+ 'replace').split('<')[0].strip()
+ comment = unicode(change.description(), "utf-8", 'replace')
+ for repo_file in change.files():
+ if repo_file.startswith(self.repo_prefix):
+ title = self._file_to_title(repo_file)
+ try:
+ rev = change[repo_file].filerev()
+ except mercurial.revlog.LookupError:
+ rev = -1
+ yield title, rev, date, author, comment
+
+ def all_pages(self):
+ """Iterate over the titles of all pages in the wiki."""
+
+ for filename in os.listdir(self.path):
+ file_path = os.path.join(self.path, filename)
+ file_repopath = os.path.join(self.repo_prefix, filename)
+ if (os.path.isfile(file_path)
+ and not os.path.islink(file_path)
+ and not filename.startswith('.')):
+ yield self._file_to_title(file_repopath)
+
+ def changed_since(self, rev):
+ """
+        Return all pages that changed since the specified repository revision.
+ """
+
+ try:
+ last = self.repo.lookup(int(rev))
+ except IndexError:
+ for page in self.all_pages():
+ yield page
+ return
+ current = self.repo.lookup('tip')
+ status = self.repo.status(current, last)
+ modified, added, removed, deleted, unknown, ignored, clean = status
+ for filename in modified + added + removed + deleted:
+ if filename.startswith(self.repo_prefix):
+ yield self._file_to_title(filename)
+
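+    # Incremental-reindex sketch (this is how WikiSearch.update() uses
+    # the method above):
+    #
+    #     last = index.get_last_revision()
+    #     for title in storage.changed_since(last):
+    #         pass  # reindex just these pages
+    #
+    # If the stored revision can't be resolved any more, every page is
+    # yielded and the whole wiki is reindexed from scratch.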
+
+class WikiSubdirectoryStorage(WikiStorage):
+ """
+    A version of WikiStorage that keeps subpages in real subdirectories
+    in the filesystem. Directory index pages are supported.
+
+ """
+
+ periods_re = re.compile(r'^[.]|(?<=/)[.]')
+ slashes_re = re.compile(r'^[/]|(?<=/)[/]')
+
+    # TODO: make this configurable
+ index = "Index"
+
+ def _title_to_file(self, title):
+ """
+        Modified escaping that allows (some) slashes and spaces.
+ If the entry is a directory, use an index file.
+ """
+
+ title = unicode(title).strip()
+ escaped = werkzeug.url_quote(title, safe='/ ')
+ escaped = self.periods_re.sub('%2E', escaped)
+ escaped = self.slashes_re.sub('%2F', escaped)
+ path = os.path.join(self.repo_prefix, escaped)
+ if os.path.isdir(os.path.join(self.repo_path, path)):
+ path = os.path.join(path, self.index)
+ if page.page_mime(title) == 'text/x-wiki' and self.extension:
+ path += self.extension
+ return path
+
+ def _file_to_title(self, filepath):
+ """If the path points to an index file, use the directory."""
+
+ if os.path.basename(filepath) == self.index:
+ filepath = os.path.dirname(filepath)
+ return super(WikiSubdirectoryStorage, self)._file_to_title(filepath)
+
+ def turn_into_subdirectory(self, path):
+ """Turn a single-file page into an index page inside a subdirectory."""
+
+ _ = self._
+ self._check_path(path)
+ dir_path = os.path.dirname(path)
+ if not os.path.isdir(dir_path):
+ self.turn_into_subdirectory(dir_path)
+ if not os.path.exists(path):
+ os.mkdir(path)
+ return
+ try:
+ temp_dir = tempfile.mkdtemp(dir=self.path)
+ temp_path = os.path.join(temp_dir, 'saved')
+ mercurial.commands.rename(self.ui, self.repo, path, temp_path)
+ os.makedirs(path)
+ index_path = os.path.join(path, self.index)
+ mercurial.commands.rename(self.ui, self.repo, temp_path,
+ index_path)
+ finally:
+ try:
+ os.rmdir(temp_dir)
+ except OSError:
+ pass
+
+        def relative(path):
+            """Make the path relative to the repository root."""
+            return path[len(self.repo_path) + 1:]
+
+        files = [relative(index_path), relative(path)]
+ self._commit(files, _(u"made subdirectory page"), "<wiki>")
+
+ @locked_repo
+ def save_file(self, title, file_name, author=u'', comment=u'',
+ parent=None):
+ """Save the file and make the subdirectories if needed."""
+
+ path = self._file_path(title)
+ dir_path = os.path.dirname(path)
+ if not os.path.isdir(dir_path):
+ self.turn_into_subdirectory(dir_path)
+ try:
+ os.makedirs(os.path.join(self.repo_path, dir_path))
+ except OSError, e:
+ if e.errno != errno.EEXIST:
+ # "File exists"
+ raise
+ super(WikiSubdirectoryStorage, self).save_file(title, file_name,
+ author, comment, parent)
+
+ @locked_repo
+ def delete_page(self, title, author=u'', comment=u''):
+ """
+ Remove empty directories after deleting a page.
+
+ Note that Mercurial doesn't track directories, so we don't have to
+ commit after removing empty directories.
+ """
+
+ super(WikiSubdirectoryStorage, self).delete_page(title, author,
+ comment)
+ file_path = self._file_path(title)
+ self._check_path(file_path)
+ dir_path = os.path.dirname(file_path)
+ if dir_path != self.repo_path:
+ try:
+ os.removedirs(dir_path)
+ except OSError, e:
+ if e.errno != errno.ENOTEMPTY:
+ # "Directory not empty"
+ raise
+
+ def all_pages(self):
+ """
+ Iterate over the titles of all pages in the wiki.
+        Include subdirectories but skip over the index files.
+ """
+
+ for (dirpath, dirnames, filenames) in os.walk(self.path):
+ path = dirpath[len(self.path) + 1:]
+ for name in filenames:
+ filepath = os.path.join(dirpath, name)
+ repopath = os.path.join(self.repo_prefix, path, name)
+ if (os.path.isfile(filepath)
+ and not name.startswith('.')):
+ yield self._file_to_title(repopath)
diff --git a/websdk/hatta/templates/backlinks.html b/websdk/hatta/templates/backlinks.html
new file mode 100644
index 0000000..d6fc56c
--- /dev/null
+++ b/websdk/hatta/templates/backlinks.html
@@ -0,0 +1,18 @@
+{% extends 'page.html' %}
+
+{% block meta %}<meta name="robots" content="NOINDEX, NOFOLLOW">{% endblock %}
+
+{% block page_title %}
+ <h1>{{ _("Links to %(title)s", title=title) }}</h1>
+{% endblock %}
+{% block title %}{{ _("Links to %(title)s", title=title) }} - {{ wiki.site_name }}{% endblock %}
+
+{% block content %}
+ <p>{{ _("Pages that contain a link to %(link)s.",
+ link=page.wiki_link(title)|safe)}}</p>
+ <ul class="backlinks">
+ {% for page_title in pages %}
+ <li>{{ page.wiki_link(page_title)|safe }}</li>
+ {% endfor %}
+ </ul>
+{% endblock %}
diff --git a/websdk/hatta/templates/base.html b/websdk/hatta/templates/base.html
new file mode 100644
index 0000000..c5525d3
--- /dev/null
+++ b/websdk/hatta/templates/base.html
@@ -0,0 +1,59 @@
+{% extends 'layout.html' %}
+
+{% block title %}
+    {{ special_title or title }} - {{ wiki.site_name }}
+{% endblock %}
+
+{% block links %}
+ <link rel="stylesheet" type="text/css"
+ href="{{ url(None, wiki.style_css) }}">
+ <link rel="stylesheet" type="text/css"
+ href="{{ url(None, wiki.pygments_css) }}">
+ <link rel="shortcut icon" type="image/x-icon"
+ href="{{ url(None, wiki.favicon_ico) }}">
+ <link rel="alternate" type="application/rss+xml"
+ title="{{ wiki.site_name }} (ATOM)"
+ href="{{ url(None, wiki.atom) }}">
+ {% if edit_url %}
+ <link rel="alternate" type="application/wiki"
+ href="{{ edit_url }}">
+ {% endif %}
+{% endblock %}
+
+{% block scripts %}
+ <script type="text/javascript"
+ src="{{ url(None, wiki.scripts_js) }}"></script>
+{% endblock %}
+
+{% block logo %}
+ {% if wiki.logo_page in page.storage %}
+ <a id="hatta-logo"
+ href="{{ url(wiki.front_page) }}"><img
+ src="{{ download_url(wiki.logo_page) }}"
+ alt="[{{ wiki.logo_page }}]"
+ ></a>
+ {% endif %}
+{% endblock %}
+
+{% block search %}
+ <form action="{{ url(None, wiki.search) }}" id="hatta-search" method="GET"
+ ><div><input
+ id="hatta-search" name="q"><input
+ class="button" type="submit" value="Search"
+ ></div></form>
+{% endblock %}
+
+{% block menu %}
+ <div id="hatta-menu">
+ {% for part in page.menu() %}
+ {{ part|safe }}
+ {% endfor %}
+ </div>
+{% endblock %}
+
+{% block page_title %}
+ <h1>{{ special_title or title }}</h1>
+{% endblock %}
+
+{% block content %}{% for part in content %}{{ part|safe }}{% endfor %}{% endblock %}
+
diff --git a/websdk/hatta/templates/changes.html b/websdk/hatta/templates/changes.html
new file mode 100644
index 0000000..04bb4e3
--- /dev/null
+++ b/websdk/hatta/templates/changes.html
@@ -0,0 +1,16 @@
+{% extends "page_special.html" %}
+
+{% block page_title %}<h1>{{ _("Recent changes") }}</h1>{% endblock %}
+{% block title %}{{ _("Recent changes") }} - {{ wiki.site_name }}{% endblock %}
+
+{% block content %}
+ <ul class="changes">
+ {% for date, date_url, title, author, comment in changes %}
+ <li><a href="{{ date_url }}">{{ date_html(date)|safe }}</a>
+ <b>{{ page.wiki_link(title)|safe }}</b> . . . .
+ <i>{{ page.wiki_link("~%s" % author, author)|safe }}</i>
+ <div class="comment">{{ comment }}</div>
+ </li>
+ {% endfor %}
+ </ul>
+{% endblock %}
diff --git a/websdk/hatta/templates/edit_file.html b/websdk/hatta/templates/edit_file.html
new file mode 100644
index 0000000..6d67790
--- /dev/null
+++ b/websdk/hatta/templates/edit_file.html
@@ -0,0 +1,25 @@
+{% extends "page.html" %}
+
+{% block page_title %}<h1>{{ _("Editing \"%(title)s\"",
+ title=title) }}</h1>{% endblock %}
+{% block title %}{{ _("Editing \"%(title)s\"", title=title) }}{% endblock %}
+
+{% block content %}
+ <p>{{ _("This is a binary file, it can't be edited on a wiki. "
+ "Please upload a new version instead.") }}</p>
+ <form action="" method="POST" class="editor"
+ enctype="multipart/form-data"><div>
+ <div class="upload"><input type="file" name="data"></div>
+ <label class="comment">{{ _("Comment") }} <input
+ name="comment" value="{{ comment }}"></label>
+ <label class="comment">{{ _("Author") }} <input
+ name="author" value="{{ author }}"></label>
+ <div class="buttons">
+ <input type="submit" name="save" value="{{ _("Save") }}">
+ <input type="submit" name="cancel" value="{{ _("Cancel") }}">
+ </div>
+ <input type="hidden" name="parent" value="{{ parent }}">
+ </div></form>
+{% endblock %}
+
+{% block footer %}{% endblock %}
diff --git a/websdk/hatta/templates/edit_text.html b/websdk/hatta/templates/edit_text.html
new file mode 100644
index 0000000..41b27b8
--- /dev/null
+++ b/websdk/hatta/templates/edit_text.html
@@ -0,0 +1,29 @@
+{% extends "page.html" %}
+
+{% block page_title %}<h1>{{ _("Editing \"%(title)s\"", title=title) }}</h1>{% endblock %}
+{% block title %}{{ _("Editing \"%(title)s\"", title=title) }}{% endblock %}
+
+{% block content %}
+ <form action="" method="POST" class="editor"><div>
+        <textarea name="text" cols="80" rows="20" id="editortext"
+ >{% for line in lines %}{{ line }}{% endfor %}</textarea>
+ <input type="hidden" name="parent" value="{{ parent }}">
+ <label class="comment">{{ _("Comment") }} <input
+ name="comment" value="{{ comment }}"></label>
+ <label class="comment">{{ _("Author") }} <input
+ name="author" value="{{ author }}"></label>
+ <div class="buttons">
+ <input type="submit" name="save" value="{{ _("Save") }}">
+ <input type="submit" name="preview" value="{{ _("Preview") }}">
+ <input type="submit" name="cancel" value="{{ _("Cancel") }}">
+ </div>
+ </div></form>
+ {% if preview %}
+ <h1 class="preview">{{ _("Preview, not saved") }}</h1>
+ <div id="hatta-preview">
+ {% for part in page.view_content(preview) %}{{ part|safe }}{% endfor %}
+ </div>
+ {% endif %}
+{% endblock %}
+
+{% block footer %}{% endblock %}
diff --git a/websdk/hatta/templates/history.html b/websdk/hatta/templates/history.html
new file mode 100644
index 0000000..d2a4d48
--- /dev/null
+++ b/websdk/hatta/templates/history.html
@@ -0,0 +1,27 @@
+{% extends 'page.html' %}
+
+{% block meta %}<meta name="robots" content="NOINDEX, NOFOLLOW">{% endblock %}
+
+{% block page_title %}
+ <h1>{{ _("History of %(title)s", title=title) }}</h1>
+{% endblock %}
+{% block title %}{{ _("History of %(title)s", title=title) }} - {{ wiki.site_name }}{% endblock %}
+
+{% block content %}
+ <p>{{ _("History of changes for %(link)s.", link=page.wiki_link(title)|safe) }}</p>
+ <form action="{{ url(title, wiki.undo, method='POST') }}" method="POST">
+ <ul id="hatta-history">
+ {% for date, date_url, rev, author, comment in history %}
+ <li><a href="{{ date_url }}"
+ >{{ date_html(date)|safe }}</a>
+ {% if edit_url %}
+ <input type="submit" name="{{ rev }}" value="{{ _('Undo') }}">
+ {% endif %}
+ . . . .
+ <i>{{ page.wiki_link('~%s' % author, author)|safe }}</i>
+ <div class="hatta-comment">{{ comment }}</div></li>
+ {% endfor %}
+ </ul>
+ <input type="hidden" name="parent" value="{{ parent_rev }}">
+ </form>
+{% endblock %}
diff --git a/websdk/hatta/templates/layout.html b/websdk/hatta/templates/layout.html
new file mode 100644
index 0000000..9754cdc
--- /dev/null
+++ b/websdk/hatta/templates/layout.html
@@ -0,0 +1,19 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
+"http://www.w3.org/TR/html4/strict.dtd">
+<html><head>
+<meta http-equiv="content-type" content="text/html;charset=utf-8">
+<title>{% block title %}{% endblock %}</title>
+ {% block links %}{% endblock %}
+ {% block meta %}{% endblock %}
+</head><body><div id="hatta-header">
+ {% block logo %}{% endblock %}
+ {% block search %}{% endblock %}
+ {% block menu %}{% endblock %}
+ {% block page_title %}{% endblock %}
+</div><div id="hatta-content">
+ {% block content %}{% endblock %}
+<div id="hatta-footer">
+ {% block footer %}{% endblock %}
+</div></div>
+ {% block scripts %}{% endblock %}
+</body></html>
diff --git a/websdk/hatta/templates/list.html b/websdk/hatta/templates/list.html
new file mode 100644
index 0000000..f5aa8a1
--- /dev/null
+++ b/websdk/hatta/templates/list.html
@@ -0,0 +1,10 @@
+{% extends 'page_special.html' %}
+
+{% block content %}
+ <p>{{ message|format(link=link|safe) }}</p>
+ <ul class="{{ class_|d('pagelist') }}">
+ {% for page_title in pages %}
+ <li>{{ page.wiki_link(page_title)|safe }}</li>
+ {% endfor %}
+ </ul>
+{% endblock %}
diff --git a/websdk/hatta/templates/page.html b/websdk/hatta/templates/page.html
new file mode 100644
index 0000000..9b42ba9
--- /dev/null
+++ b/websdk/hatta/templates/page.html
@@ -0,0 +1,15 @@
+{% extends 'base.html' %}
+
+{% block page_title %}<h1>{{ title }}</h1>{% endblock %}
+{% block title %}{{ title }} - {{ wiki.site_name }}{% endblock %}
+
+{% block footer %}
+ {% if edit_url %}
+ <a href="{{ edit_url }}"
+ class="edit">{{ _('Edit') }}</a>
+ {% endif %}
+ <a href="{{ url(title, wiki.history) }}"
+ class="hatta-history">{{ _('History') }}</a>
+ <a href="{{ url(title, wiki.backlinks) }}"
+ class="hatta-backlinks">{{ _('Backlinks') }}</a>
+{% endblock %}
diff --git a/websdk/hatta/templates/page_special.html b/websdk/hatta/templates/page_special.html
new file mode 100644
index 0000000..e4315e8
--- /dev/null
+++ b/websdk/hatta/templates/page_special.html
@@ -0,0 +1,13 @@
+{% extends 'base.html' %}
+
+{% block meta %}<meta name="robots" content="NOINDEX, NOFOLLOW">{% endblock %}
+
+{% block page_title %}<h1>{{ special_title }}</h1>{% endblock %}
+{% block title %}{{ special_title }} - {{ wiki.site_name }}{% endblock %}
+
+{% block footer %}
+ <a href="{{ url(None, wiki.recent_changes) }}" class="changes">Changes</a>
+ <a href="{{ url(None, wiki.all_pages) }}" class="index">Index</a>
+ <a href="{{ url(None, wiki.orphaned) }}" class="orphaned">Orphaned</a>
+ <a href="{{ url(None, wiki.wanted) }}" class="wanted">Wanted</a>
+{% endblock %}
diff --git a/websdk/hatta/templates/wanted.html b/websdk/hatta/templates/wanted.html
new file mode 100644
index 0000000..dcc2a2b
--- /dev/null
+++ b/websdk/hatta/templates/wanted.html
@@ -0,0 +1,17 @@
+{% extends "page_special.html" %}
+
+{% block page_title %}{{ _("Wanted pages") }}{% endblock %}
+{% block title %}{{ _("Wanted pages") }}{% endblock %}
+
+{% block content %}
+ <p>{{ _("List of pages that are linked to, but don't exist yet.") }}</p>
+ <ul class="wanted">
+ {% for refs, page_title in pages %}
+ <li><b>{{ page.wiki_link(page_title)|safe }}</b>
+ <i>(<a href="{{ url(page_title, wiki.backlinks) }}"
+ class="backlinks"
+>{{ ngettext("%(num)d reference", "%(num)d references", refs) }}</a>)</i>
+ </li>
+ {% endfor %}
+ </ul>
+{% endblock %}
diff --git a/websdk/hatta/wiki.py b/websdk/hatta/wiki.py
new file mode 100644
index 0000000..7eb961c
--- /dev/null
+++ b/websdk/hatta/wiki.py
@@ -0,0 +1,954 @@
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import gettext
+import os
+import sys
+import re
+import tempfile
+import itertools
+
+import werkzeug
+import werkzeug.routing
+import werkzeug.contrib.atom
+import jinja2
+
+pygments = None
+try:
+ import pygments
+except ImportError:
+ pass
+
+import hatta
+import storage
+import search
+import page
+import parser
+import error
+import data
+
+import mercurial # import it after storage!
+
+
+class WikiResponse(werkzeug.BaseResponse, werkzeug.ETagResponseMixin,
+ werkzeug.CommonResponseDescriptorsMixin):
+ """A typical HTTP response class made out of Werkzeug's mixins."""
+
+ def make_conditional(self, request):
+ ret = super(WikiResponse, self).make_conditional(request)
+ # Remove all headers if it's 304, according to
+ # http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
+ if self.status.startswith('304'):
+ self.response = []
+ try:
+ del self.content_type
+ except (AttributeError, KeyError, IndexError):
+ pass
+ try:
+ del self.content_length
+ except (AttributeError, KeyError, IndexError):
+ pass
+ try:
+ del self.headers['Content-length']
+ except (AttributeError, KeyError, IndexError):
+ pass
+ try:
+ del self.headers['Content-type']
+ except (AttributeError, KeyError, IndexError):
+ pass
+ return ret
+
+
+class WikiTempFile(object):
+ """Wrap a file for uploading content."""
+
+ def __init__(self, tmppath):
+ self.tmppath = tempfile.mkdtemp(dir=tmppath)
+ self.tmpname = os.path.join(self.tmppath, 'saved')
+ self.f = open(self.tmpname, "wb")
+
+ def read(self, *args, **kw):
+ return self.f.read(*args, **kw)
+
+ def readlines(self, *args, **kw):
+ return self.f.readlines(*args, **kw)
+
+ def write(self, *args, **kw):
+ return self.f.write(*args, **kw)
+
+ def seek(self, *args, **kw):
+ return self.f.seek(*args, **kw)
+
+ def truncate(self, *args, **kw):
+ return self.f.truncate(*args, **kw)
+
+ def close(self, *args, **kw):
+ ret = self.f.close(*args, **kw)
+ try:
+ os.unlink(self.tmpname)
+ except OSError:
+ pass
+ try:
+ os.rmdir(self.tmppath)
+ except OSError:
+ pass
+ return ret
+
+
+class WikiRequest(werkzeug.BaseRequest, werkzeug.ETagRequestMixin):
+ """
+    A Werkzeug request with additional functions for handling file
+ uploads and wiki-specific link generation.
+ """
+
+ charset = 'utf-8'
+ encoding_errors = 'ignore'
+
+ def __init__(self, wiki, adapter, environ, **kw):
+ werkzeug.BaseRequest.__init__(self, environ, shallow=False, **kw)
+ self.wiki = wiki
+ self.adapter = adapter
+ self.tmpfiles = []
+ self.tmppath = wiki.path
+
+ def get_url(self, title=None, view=None, method='GET',
+ external=False, **kw):
+ if view is None:
+ view = self.wiki.view
+ if title is not None:
+ kw['title'] = title.strip()
+ return self.adapter.build(view, kw, method=method,
+ force_external=external)
+
+ def get_download_url(self, title):
+ return self.get_url(title, view=self.wiki.download)
+
+ def get_author(self):
+ """Try to guess the author name. Use IP address as last resort."""
+
+ try:
+ cookie = werkzeug.url_unquote(self.cookies.get("author", ""))
+ except UnicodeError:
+ cookie = None
+ try:
+ auth = werkzeug.url_unquote(self.environ.get('REMOTE_USER', ""))
+ except UnicodeError:
+ auth = None
+ author = (self.form.get("author") or cookie or auth or
+ self.remote_addr)
+ return author
+
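+    # Precedence sketch: an explicit "author" form field wins, then the
+    # "author" cookie, then REMOTE_USER from the WSGI environment, and
+    # finally the client's IP address, so every edit is attributed to
+    # something.
+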
+ def _get_file_stream(self, total_content_length=None, content_type=None,
+ filename=None, content_length=None):
+ """Save all the POSTs to temporary files."""
+
+ temp_file = WikiTempFile(self.tmppath)
+ self.tmpfiles.append(temp_file)
+ return temp_file
+
+ def cleanup(self):
+ """Clean up the temporary files created by POSTs."""
+
+ for temp_file in self.tmpfiles:
+ temp_file.close()
+ self.tmpfiles = []
+
+
+class WikiTitleConverter(werkzeug.routing.PathConverter):
+ """Behaves like the path converter, but doesn't match the "+ pages"."""
+
+ def to_url(self, value):
+ return werkzeug.url_quote(value.strip(), self.map.charset, safe="/")
+
+ regex = '([^+%]|%[^2]|%2[^Bb]).*'
+
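+    # The regex is a negated prefix test: it rejects titles that begin
+    # with "+" (reserved for special views such as /+history/) or with an
+    # encoded plus ("%2B"/"%2b"); e.g. it matches "Home" but not "+index".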
+
+class WikiAllConverter(werkzeug.routing.BaseConverter):
+ """Matches everything."""
+
+ regex = '.*'
+
+
+class URL(object):
+ """A decorator for marking methods as endpoints for URLs."""
+
+ urls = []
+
+ def __init__(self, url, methods=None):
+ """Create a decorator with specified parameters."""
+
+ self.url = url
+ self.methods = methods or ['GET', 'HEAD']
+
+ def __call__(self, func):
+ """The actual decorator only records the data."""
+
+ self.urls.append((func.__name__, self.url, self.methods))
+ return func
+
+ @classmethod
+ def rules(cls, app):
+ """Returns the routing rules, using app's bound methods."""
+
+ for name, url, methods in cls.urls:
+ func = getattr(app, name, None)
+ if not callable(func):
+ continue
+ yield werkzeug.routing.Rule(url, endpoint=func, methods=methods)
+
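+# Usage sketch: view methods register themselves when the Wiki class body
+# is executed, e.g.
+#
+#     @URL('/+index')
+#     def all_pages(self, request):
+#         ...
+#
+# and URL.rules(wiki) later turns the recorded (name, url, methods)
+# triples into werkzeug routing rules bound to that wiki instance.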
+
+class Wiki(object):
+ """
+ The main class of the wiki, handling initialization of the whole
+ application and most of the logic.
+ """
+ storage_class = storage.WikiStorage
+ index_class = search.WikiSearch
+ filename_map = page.filename_map
+ mime_map = page.mime_map
+ icon = data.icon
+ scripts = data.scripts
+ style = data.style
+
+ def __init__(self, config):
+ if config.get_bool('show_version', False):
+ sys.stdout.write("Hatta %s\n" % hatta.__version__)
+ sys.exit()
+ self.dead = False
+ self.config = config
+
+ self.language = config.get('language', None)
+ if self.language is not None:
+ try:
+ translation = gettext.translation('hatta', 'locale',
+ languages=[self.language])
+
+ except IOError:
+ translation = gettext.translation('hatta', fallback=True,
+ languages=[self.language])
+ else:
+ translation = gettext.translation('hatta', fallback=True)
+ self.gettext = translation.ugettext
+ self.template_env = jinja2.Environment(
+ extensions=['jinja2.ext.i18n'],
+ loader=jinja2.PackageLoader('hatta', 'templates'),
+ )
+ self.template_env.autoescape = True
+ self.template_env.install_gettext_translations(translation, True)
+ self.path = os.path.abspath(config.get('pages_path', 'docs'))
+ self.page_charset = config.get('page_charset', 'utf-8')
+ self.menu_page = self.config.get('menu_page', u'Menu')
+ self.front_page = self.config.get('front_page', u'Home')
+ self.logo_page = self.config.get('logo_page', u'logo.png')
+ self.locked_page = self.config.get('locked_page', u'Locked')
+ self.site_name = self.config.get('site_name', u'Hatta Wiki')
+ self.read_only = self.config.get_bool('read_only', False)
+ self.icon_page = self.config.get('icon_page', None)
+ self.alias_page = self.config.get('alias_page', 'Alias')
+ self.pygments_style = self.config.get('pygments_style', 'tango')
+ self.subdirectories = self.config.get_bool('subdirectories', False)
+ self.extension = self.config.get('extension', None)
+ self.unix_eol = self.config.get_bool('unix_eol', False)
+ if self.subdirectories:
+ self.storage = storage.WikiSubdirectoryStorage(self.path,
+ self.page_charset,
+ self.gettext,
+ self.unix_eol,
+ self.extension)
+ else:
+ self.storage = self.storage_class(self.path, self.page_charset,
+ self.gettext, self.unix_eol,
+ self.extension)
+ self.cache = os.path.abspath(config.get('cache_path',
+ os.path.join(self.storage.repo_path,
+ '.hg', 'hatta', 'cache')))
+ self.index = self.index_class(self.cache, self.language, self.storage)
+ self.index.update(self)
+ self.url_rules = URL.rules(self)
+ self.url_map = werkzeug.routing.Map(self.url_rules, converters={
+ 'title': WikiTitleConverter,
+ 'all': WikiAllConverter,
+ })
+
+ def add_url_rule(self, rule):
+ """Let plugins add additional url rules."""
+
+ self.url_rules.append(rule)
+ self.url_map = werkzeug.routing.Map(self.url_rules, converters={
+ 'title': WikiTitleConverter,
+ 'all': WikiAllConverter,
+ })
+
+ def get_page(self, request, title):
+ """Creates a page object based on page's mime type"""
+
+ if title:
+ try:
+ page_class, mime = self.filename_map[title]
+ except KeyError:
+ mime = page.page_mime(title)
+ major, minor = mime.split('/', 1)
+ try:
+ page_class = self.mime_map[mime]
+ except KeyError:
+ try:
+ plus_pos = minor.find('+')
+ if plus_pos > 0:
+ minor_base = minor[plus_pos:]
+ else:
+ minor_base = ''
+ base_mime = '/'.join([major, minor_base])
+ page_class = self.mime_map[base_mime]
+ except KeyError:
+ try:
+ page_class = self.mime_map[major]
+ except KeyError:
+ page_class = self.mime_map['']
+ else:
+ page_class = page.WikiPageSpecial
+ mime = ''
+ return page_class(self, request, title, mime)
+
+ def response(self, request, title, content, etag='', mime='text/html',
+ rev=None, size=None):
+ """Create a WikiResponse for a page."""
+
+ response = WikiResponse(content, mimetype=mime)
+ if rev is None:
+ inode, _size, mtime = self.storage.page_file_meta(title)
+ response.set_etag(u'%s/%s/%d-%d' % (etag,
+ werkzeug.url_quote(title),
+ inode, mtime))
+ if size == -1:
+ size = _size
+ else:
+ response.set_etag(u'%s/%s/%s' % (etag, werkzeug.url_quote(title),
+ rev))
+ if size:
+ response.content_length = size
+ response.make_conditional(request)
+ return response
+
+ def _check_lock(self, title):
+ _ = self.gettext
+ restricted_pages = [
+ 'scripts.js',
+ 'robots.txt',
+ ]
+ if self.read_only:
+ raise error.ForbiddenErr(_(u"This site is read-only."))
+ if title in restricted_pages:
+ raise error.ForbiddenErr(_(u"""Can't edit this page.
+It can only be edited by the site admin directly on the disk."""))
+ if title in self.index.page_links(self.locked_page):
+ raise error.ForbiddenErr(_(u"This page is locked."))
+
+ def _serve_default(self, request, title, content, mime):
+ """Some pages have their default content."""
+
+ if title in self.storage:
+ return self.download(request, title)
+ response = WikiResponse(content, mimetype=mime)
+ response.set_etag('/%s/-1' % title)
+ response.make_conditional(request)
+ return response
+
+ @URL('/<title:title>')
+ @URL('/')
+ def view(self, request, title=None):
+ if title is None:
+ title = self.front_page
+ page = self.get_page(request, title)
+ try:
+ content = page.view_content()
+ except error.NotFoundErr:
+ url = request.get_url(title, self.edit, external=True)
+ return werkzeug.routing.redirect(url, code=303)
+ html = page.template("page.html", content=content)
+ dependencies = page.dependencies()
+ etag = '/(%s)' % u','.join(dependencies)
+ return self.response(request, title, html, etag=etag)
+
+ @URL('/+history/<title:title>/<int:rev>')
+ def revision(self, request, title, rev):
+ _ = self.gettext
+ text = self.storage.revision_text(title, rev)
+ link = werkzeug.html.a(werkzeug.html(title),
+ href=request.get_url(title))
+ content = [
+ werkzeug.html.p(
+ werkzeug.html(
+ _(u'Content of revision %(rev)d of page %(title)s:'))
+ % {'rev': rev, 'title': link}),
+ werkzeug.html.pre(werkzeug.html(text)),
+ ]
+ special_title = _(u'Revision of "%(title)s"') % {'title': title}
+ page = self.get_page(request, title)
+ html = page.template('page_special.html', content=content,
+ special_title=special_title)
+ response = self.response(request, title, html, rev=rev, etag='/old')
+ return response
+
+ @URL('/+version/')
+ @URL('/+version/<title:title>')
+ def version(self, request, title=None):
+ if title is None:
+ version = self.storage.repo_revision()
+ else:
+ try:
+ version, x, x, x = self.storage.page_history(title).next()
+ except StopIteration:
+ version = 0
+ return WikiResponse('%d' % version, mimetype="text/plain")
+
+ @URL('/+edit/<title:title>', methods=['POST'])
+ def save(self, request, title):
+ _ = self.gettext
+ self._check_lock(title)
+ url = request.get_url(title)
+ if request.form.get('cancel'):
+ if title not in self.storage:
+ url = request.get_url(self.front_page)
+ if request.form.get('preview'):
+ text = request.form.get("text")
+ if text is not None:
+ lines = text.split('\n')
+ else:
+ lines = [werkzeug.html.p(werkzeug.html(
+ _(u'No preview for binaries.')))]
+ return self.edit(request, title, preview=lines)
+ elif request.form.get('save'):
+ comment = request.form.get("comment", "")
+ author = request.get_author()
+ text = request.form.get("text")
+ try:
+ parent = int(request.form.get("parent"))
+ except (ValueError, TypeError):
+ parent = None
+ self.storage.reopen()
+ self.index.update(self)
+ page = self.get_page(request, title)
+ if text is not None:
+ if title == self.locked_page:
+ for link, label in page.extract_links(text):
+ if title == link:
+ raise error.ForbiddenErr(
+ _(u"This page is locked."))
+ if u'href="' in comment or u'http:' in comment:
+ raise error.ForbiddenErr()
+ if text.strip() == '':
+ self.storage.delete_page(title, author, comment)
+ url = request.get_url(self.front_page)
+ else:
+ self.storage.save_text(title, text, author, comment,
+ parent)
+ else:
+ text = u''
+ upload = request.files['data']
+ f = upload.stream
+ if f is not None and upload.filename is not None:
+ try:
+ self.storage.save_file(title, f.tmpname, author,
+ comment, parent)
+ except AttributeError:
+ self.storage.save_data(title, f.read(), author,
+ comment, parent)
+ else:
+ self.storage.delete_page(title, author, comment)
+ url = request.get_url(self.front_page)
+ self.index.update_page(page, title, text=text)
+ response = werkzeug.routing.redirect(url, code=303)
+ response.set_cookie('author',
+ werkzeug.url_quote(request.get_author()),
+ max_age=604800)
+ return response
+
+ @URL('/+edit/<title:title>', methods=['GET'])
+ def edit(self, request, title, preview=None):
+ self._check_lock(title)
+ exists = title in self.storage
+ if exists:
+ self.storage.reopen()
+ page = self.get_page(request, title)
+ html = page.render_editor(preview)
+ if not exists:
+ response = WikiResponse(html, mimetype="text/html",
+ status='404 Not found')
+
+ elif preview:
+ response = WikiResponse(html, mimetype="text/html")
+ else:
+ response = self.response(request, title, html, '/edit')
+ response.headers.add('Cache-Control', 'no-cache')
+ return response
+
+ @URL('/+feed/atom')
+ @URL('/+feed/rss')
+ def atom(self, request):
+ _ = self.gettext
+ feed = werkzeug.contrib.atom.AtomFeed(self.site_name,
+ feed_url=request.url,
+ url=request.adapter.build(self.view, force_external=True),
+ subtitle=_(u'Track the most recent changes to the wiki '
+ u'in this feed.'))
+ history = itertools.islice(self.storage.history(), None, 10, None)
+ unique_titles = set()
+ for title, rev, date, author, comment in history:
+ if title in unique_titles:
+ continue
+ unique_titles.add(title)
+ if rev > 0:
+ url = request.adapter.build(self.diff, {
+ 'title': title,
+ 'from_rev': rev - 1,
+ 'to_rev': rev,
+ }, force_external=True)
+ else:
+ url = request.adapter.build(self.revision, {
+ 'title': title,
+ 'rev': rev,
+ }, force_external=True)
+ feed.add(title, comment, content_type="text", author=author,
+ url=url, updated=date)
+ rev = self.storage.repo_revision()
+ response = self.response(request, 'atom', feed.generate(), '/+feed',
+ 'application/xml', rev)
+ response.make_conditional(request)
+ return response
+
+ @URL('/+download/<title:title>')
+ def download(self, request, title):
+ """Serve the raw content of a page directly from disk."""
+
+ mime = page.page_mime(title)
+ if mime == 'text/x-wiki':
+ mime = 'text/plain'
+ try:
+ wrap_file = werkzeug.wrap_file
+ except AttributeError:
+ wrap_file = lambda x, y: y
+ f = wrap_file(request.environ, self.storage.open_page(title))
+ response = self.response(request, title, f, '/download', mime, size=-1)
+ response.direct_passthrough = True
+ return response
+
+ @URL('/+render/<title:title>')
+ def render(self, request, title):
+ """Serve a thumbnail or otherwise rendered content."""
+
+ def file_time_and_size(file_path):
+ """Get file's modification timestamp and its size."""
+
+ try:
+ (st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size,
+ st_atime, st_mtime, st_ctime) = os.stat(file_path)
+ except OSError:
+ st_mtime = 0
+ st_size = None
+ return st_mtime, st_size
+
+ def rm_temp_dir(dir_path):
+ """Delete the directory with subdirectories."""
+
+ for root, dirs, files in os.walk(dir_path, topdown=False):
+ for name in files:
+ try:
+ os.remove(os.path.join(root, name))
+ except OSError:
+ pass
+ for name in dirs:
+ try:
+ os.rmdir(os.path.join(root, name))
+ except OSError:
+ pass
+ try:
+ os.rmdir(dir_path)
+ except OSError:
+ pass
+
+ page = self.get_page(request, title)
+ try:
+ cache_filename, cache_mime = page.render_mime()
+ render = page.render_cache
+ except (AttributeError, NotImplementedError):
+ return self.download(request, title)
+
+ cache_dir = os.path.join(self.cache, 'render',
+ werkzeug.url_quote(title, safe=''))
+ cache_file = os.path.join(cache_dir, cache_filename)
+ page_inode, page_size, page_mtime = self.storage.page_file_meta(title)
+ cache_mtime, cache_size = file_time_and_size(cache_file)
+ if page_mtime > cache_mtime:
+ if not os.path.exists(cache_dir):
+ os.makedirs(cache_dir)
+ try:
+ temp_dir = tempfile.mkdtemp(dir=cache_dir)
+ result_file = render(temp_dir)
+ mercurial.util.rename(result_file, cache_file)
+ finally:
+ rm_temp_dir(temp_dir)
+ try:
+ wrap_file = werkzeug.wrap_file
+ except AttributeError:
+ wrap_file = lambda x, y: y
+ f = wrap_file(request.environ, open(cache_file))
+ response = self.response(request, title, f, '/render', cache_mime,
+ size=cache_size)
+ response.direct_passthrough = True
+ return response
+
+ @URL('/+undo/<title:title>', methods=['POST'])
+ def undo(self, request, title):
+ """Revert a change to a page."""
+
+ _ = self.gettext
+ self._check_lock(title)
+ rev = None
+ for key in request.form:
+ try:
+ rev = int(key)
+ except ValueError:
+ pass
+ author = request.get_author()
+ if rev is not None:
+ try:
+ parent = int(request.form.get("parent"))
+ except (ValueError, TypeError):
+ parent = None
+ self.storage.reopen()
+ self.index.update(self)
+ if rev == 0:
+ comment = _(u'Delete page %(title)s') % {'title': title}
+ data = ''
+ self.storage.delete_page(title, author, comment)
+ else:
+ comment = _(u'Undo of change %(rev)d of page %(title)s') % {
+ 'rev': rev, 'title': title}
+ data = self.storage.page_revision(title, rev - 1)
+ self.storage.save_data(title, data, author, comment, parent)
+ page = self.get_page(request, title)
+ self.index.update_page(page, title, data=data)
+ url = request.adapter.build(self.history, {'title': title},
+ method='GET', force_external=True)
+ return werkzeug.redirect(url, 303)
+
+ @URL('/+history/<title:title>')
+ def history(self, request, title):
+ """Display history of changes of a page."""
+
+ max_rev = -1
+ history = []
+ page = self.get_page(request, title)
+ for rev, date, author, comment in self.storage.page_history(title):
+ if max_rev < rev:
+ max_rev = rev
+ if rev > 0:
+ date_url = request.adapter.build(self.diff, {
+ 'title': title, 'from_rev': rev - 1, 'to_rev': rev})
+ else:
+ date_url = request.adapter.build(self.revision, {
+ 'title': title, 'rev': rev})
+ history.append((date, date_url, rev, author, comment))
+ html = page.template('history.html', history=history,
+ date_html=hatta.page.date_html, parent=max_rev)
+ response = self.response(request, title, html, '/history')
+ return response
+
+ @URL('/+history/')
+ def recent_changes(self, request):
+ """Serve the recent changes page."""
+
+ def _changes_list():
+ last = {}
+ lastrev = {}
+ count = 0
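+            # Collapse consecutive edits of a page by the same author with
+            # the same comment, and stop after the 100 most recent entries.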
+ for title, rev, date, author, comment in self.storage.history():
+ if (author, comment) == last.get(title, (None, None)):
+ continue
+ count += 1
+ if count > 100:
+ break
+ if rev > 0:
+ date_url = request.adapter.build(self.diff, {
+ 'title': title,
+ 'from_rev': rev - 1,
+ 'to_rev': lastrev.get(title, rev),
+ })
+ elif rev == 0:
+ date_url = request.adapter.build(self.revision, {
+ 'title': title, 'rev': rev})
+ else:
+ date_url = request.adapter.build(self.history, {
+ 'title': title})
+ last[title] = author, comment
+ lastrev[title] = rev
+
+ yield date, date_url, title, author, comment
+
+ page = self.get_page(request, '')
+ html = page.template('changes.html', changes=_changes_list(),
+ date_html=hatta.page.date_html)
+ response = WikiResponse(html, mimetype='text/html')
+ response.set_etag('/history/%d' % self.storage.repo_revision())
+ response.make_conditional(request)
+ return response
+
+ @URL('/+history/<title:title>/<int:from_rev>:<int:to_rev>')
+ def diff(self, request, title, from_rev, to_rev):
+ """Show the differences between specified revisions."""
+
+ _ = self.gettext
+ page = self.get_page(request, title)
+ build = request.adapter.build
+ from_url = build(self.revision, {'title': title, 'rev': from_rev})
+ to_url = build(self.revision, {'title': title, 'rev': to_rev})
+ a = werkzeug.html.a
+ links = {
+ 'link1': a(str(from_rev), href=from_url),
+ 'link2': a(str(to_rev), href=to_url),
+ 'link': a(werkzeug.html(title), href=request.get_url(title)),
+ }
+ message = werkzeug.html(_(
+ u'Differences between revisions %(link1)s and %(link2)s '
+ u'of page %(link)s.')) % links
+ diff_content = getattr(page, 'diff_content', None)
+ if diff_content:
+ from_text = self.storage.revision_text(page.title, from_rev)
+ to_text = self.storage.revision_text(page.title, to_rev)
+ content = page.diff_content(from_text, to_text, message)
+ else:
+ content = [werkzeug.html.p(werkzeug.html(
+                _(u"Diff not available for this kind of page.")))]
+ special_title = _(u'Diff for "%(title)s"') % {'title': title}
+ html = page.template('page_special.html', content=content,
+ special_title=special_title)
+ response = WikiResponse(html, mimetype='text/html')
+ return response
+
+ @URL('/+index')
+ def all_pages(self, request):
+ """Show index of all pages in the wiki."""
+
+ _ = self.gettext
+ page = self.get_page(request, '')
+ html = page.template('list.html',
+ pages=sorted(self.storage.all_pages()),
+ class_='index',
+ message=_(u'Index of all pages'),
+ special_title=_(u'Page Index'))
+ response = WikiResponse(html, mimetype='text/html')
+ response.set_etag('/+index/%d' % self.storage.repo_revision())
+ response.make_conditional(request)
+ return response
+
+ @URL('/+orphaned')
+ def orphaned(self, request):
+ """Show all pages that don't have backlinks."""
+
+ _ = self.gettext
+ page = self.get_page(request, '')
+ html = page.template('list.html',
+ pages=self.index.orphaned_pages(),
+ class_='orphaned',
+ message=_(u'List of pages with no links to them'),
+ special_title=_(u'Orphaned pages'))
+ response = WikiResponse(html, mimetype='text/html')
+ response.set_etag('/+orphaned/%d' % self.storage.repo_revision())
+ response.make_conditional(request)
+ return response
+
+ @URL('/+wanted')
+ def wanted(self, request):
+ """Show all pages that don't exist yet, but are linked."""
+
+ def _wanted_pages_list():
+ for refs, title in self.index.wanted_pages():
+ if not (parser.external_link(title) or title.startswith('+')
+ or title.startswith(':')):
+ yield refs, title
+
+ page = self.get_page(request, '')
+ html = page.template('wanted.html', pages=_wanted_pages_list())
+ response = WikiResponse(html, mimetype='text/html')
+ response.set_etag('/+wanted/%d' % self.storage.repo_revision())
+ response.make_conditional(request)
+ return response
+
+ @URL('/+search', methods=['GET', 'POST'])
+ def search(self, request):
+ """Serve the search results page."""
+
+ _ = self.gettext
+
+ def search_snippet(title, words):
+ """Extract a snippet of text for search results."""
+
+ try:
+ text = self.storage.page_text(title)
+ except error.NotFoundErr:
+ return u''
+ regexp = re.compile(u"|".join(re.escape(w) for w in words),
+ re.U | re.I)
+ match = regexp.search(text)
+ if match is None:
+ return u""
+ position = match.start()
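+            # Take about 60 characters of context on each side of the
+            # first match.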
+ min_pos = max(position - 60, 0)
+ max_pos = min(position + 60, len(text))
+ snippet = werkzeug.escape(text[min_pos:max_pos])
+ highlighted = werkzeug.html.b(match.group(0), class_="highlight")
+ html = regexp.sub(highlighted, snippet)
+ return html
+
+ def page_search(words, page, request):
+ """Display the search results."""
+
+ h = werkzeug.html
+ self.storage.reopen()
+ self.index.update(self)
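+            # find() yields (score, title) pairs; present them in order of
+            # descending score.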
+ result = sorted(self.index.find(words), key=lambda x: -x[0])
+            yield h.p(h(_(u'%d page(s) containing all words:')
+                        % len(result)))
+ yield u'<ol class="search">'
+ for number, (score, title) in enumerate(result):
+ yield h.li(h.b(page.wiki_link(title)), u' ', h.i(str(score)),
+ h.div(search_snippet(title, words),
+ _class="snippet"),
+ id_="search-%d" % (number + 1))
+ yield u'</ol>'
+
+ query = request.values.get('q', u'').strip()
+ page = self.get_page(request, '')
+ if not query:
+ url = request.get_url(view=self.all_pages, external=True)
+ return werkzeug.routing.redirect(url, code=303)
+ words = tuple(self.index.split_text(query))
+ if not words:
+ words = (query,)
+ title = _(u'Searching for "%s"') % u" ".join(words)
+ content = page_search(words, page, request)
+ html = page.template('page_special.html', content=content,
+ special_title=title)
+ return WikiResponse(html, mimetype='text/html')
+
+ @URL('/+search/<title:title>', methods=['GET', 'POST'])
+ def backlinks(self, request, title):
+ """Serve the page with backlinks."""
+
+ self.storage.reopen()
+ self.index.update(self)
+ page = self.get_page(request, title)
+ html = page.template('backlinks.html',
+ pages=self.index.page_backlinks(title))
+ response = WikiResponse(html, mimetype='text/html')
+ response.set_etag('/+search/%d' % self.storage.repo_revision())
+ response.make_conditional(request)
+ return response
+
+ @URL('/+download/scripts.js')
+ def scripts_js(self, request):
+        """Serve the default scripts."""
+
+ return self._serve_default(request, 'scripts.js', self.scripts,
+ 'text/javascript')
+
+ @URL('/+download/style.css')
+ def style_css(self, request):
+        """Serve the default style."""
+
+ return self._serve_default(request, 'style.css', self.style,
+ 'text/css')
+
+ @URL('/+download/pygments.css')
+ def pygments_css(self, request):
+        """Serve the default pygments style."""
+
+ _ = self.gettext
+ if pygments is None:
+ raise error.NotImplementedErr(
+ _(u"Code highlighting is not available."))
+
+ pygments_style = self.pygments_style
+ if pygments_style not in pygments.styles.STYLE_MAP:
+ pygments_style = 'default'
+ formatter = pygments.formatters.HtmlFormatter(style=pygments_style)
+ style_defs = formatter.get_style_defs('.highlight')
+ return self._serve_default(request, 'pygments.css', style_defs,
+ 'text/css')
+
+ @URL('/favicon.ico')
+ def favicon_ico(self, request):
+ """Serve the default favicon."""
+
+ return self._serve_default(request, 'favicon.ico', self.icon,
+ 'image/x-icon')
+
+ @URL('/robots.txt')
+ def robots_txt(self, request):
+ """Serve the robots directives."""
+
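+        # '%2B' is the URL-encoded form of '+', so the special pages stay
+        # excluded even when the plus sign is escaped in the request path.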
+ robots = ('User-agent: *\r\n'
+ 'Disallow: /+*\r\n'
+ 'Disallow: /%2B*\r\n'
+ 'Disallow: /+edit\r\n'
+ 'Disallow: /+feed\r\n'
+ 'Disallow: /+history\r\n'
+ 'Disallow: /+search\r\n'
+ 'Disallow: /+hg\r\n')
+ return self._serve_default(request, 'robots.txt', robots,
+ 'text/plain')
+
+ @URL('/+hg<all:path>', methods=['GET', 'POST', 'HEAD'])
+ def hgweb(self, request, path=None):
+ """
+ Serve the pages repository on the web like a normal hg repository.
+ """
+
+ _ = self.gettext
+ if not self.config.get_bool('hgweb', False):
+ raise error.ForbiddenErr(_(u'Repository access disabled.'))
+ app = mercurial.hgweb.request.wsgiapplication(
+ lambda: mercurial.hgweb.hgweb(self.storage.repo, self.site_name))
+
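+        # Rebase the wrapped application: strip the '/+hg' prefix from
+        # PATH_INFO and append it to SCRIPT_NAME, so hgweb builds URLs
+        # relative to its mount point.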
+ def hg_app(env, start):
+ env = request.environ
+ prefix = '/+hg'
+ if env['PATH_INFO'].startswith(prefix):
+ env["PATH_INFO"] = env["PATH_INFO"][len(prefix):]
+ env["SCRIPT_NAME"] += prefix
+ return app(env, start)
+ return hg_app
+
+ @URL('/shutdown', methods=['GET'])
+ def die(self, request):
+ """Terminate the standalone server if invoked from localhost."""
+ _ = self.gettext
+ if not request.remote_addr.startswith('127.'):
+ raise error.ForbiddenErr(
+ _(u'This URL can only be called locally.'))
+
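+        # Werkzeug's development server exposes a shutdown callable in the
+        # WSGI environ; other servers do not, hence the RuntimeError below.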
+        if 'werkzeug.server.shutdown' not in request.environ:
+ raise RuntimeError('Not running the development server')
+ request.environ['werkzeug.server.shutdown']()
+
+ def agony(request):
+ yield u'Goodbye!'
+ self.dead = True
+
+ return WikiResponse(agony(request), mimetype='text/plain')
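+    # Example: if the standalone server listens on port 8080 (adjust to
+    # your setup), it can be stopped from the local machine with:
+    #     curl http://127.0.0.1:8080/shutdown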
+
+ @werkzeug.responder
+ def application(self, environ, start):
+ """The main application loop."""
+
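+        # Match the URL against the routing map and dispatch to the bound
+        # view; werkzeug's HTTP exceptions double as WSGI responses, so
+        # they can be returned directly.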
+ adapter = self.url_map.bind_to_environ(environ)
+ request = WikiRequest(self, adapter, environ)
+ try:
+ try:
+ endpoint, values = adapter.match()
+ return endpoint(request, **values)
+ except werkzeug.exceptions.HTTPException, err:
+ return err
+ finally:
+ request.cleanup()
+ del request
+ del adapter