author     Aleksey Lim <alsroot@member.fsf.org>          2009-03-02 06:03:47 (GMT)
committer  Walter Bender <walter@walter-laptop.(none)>   2009-03-02 14:45:14 (GMT)
commit     0b218d3045918e3c32985ced34eb70aa2764e387 (patch)
tree       4462b4356a28ea7f5367a45225d379228f0ed16d /infoslicer
parent     edb525a310c2c165de2ee26c1885cb676f867287 (diff)
Move sugar-free components to infoslicer/ core-library directory
Diffstat (limited to 'infoslicer')
25 files changed, 5937 insertions, 0 deletions
diff --git a/infoslicer/COPYING b/infoslicer/COPYING
new file mode 100644
index 0000000..63e41a4
--- /dev/null
+++ b/infoslicer/COPYING
@@ -0,0 +1,339 @@
+                    GNU GENERAL PUBLIC LICENSE
+ Version 2, June 1991
+
+ Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The licenses for most software are designed to take away your
+freedom to share and change it. By contrast, the GNU General Public
+License is intended to guarantee your freedom to share and change free
+software--to make sure the software is free for all its users. This
+General Public License applies to most of the Free Software
+Foundation's software and to any other program whose authors commit to
+using it. (Some other Free Software Foundation software is covered by
+the GNU Lesser General Public License instead.) You can apply it to
+your programs, too.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+this service if you wish), that you receive source code or can get it
+if you want it, that you can change the software or use pieces of it
+in new free programs; and that you know you can do these things.
+
+ To protect your rights, we need to make restrictions that forbid
+anyone to deny you these rights or to ask you to surrender the rights.
+These restrictions translate to certain responsibilities for you if you
+distribute copies of the software, or if you modify it.
+
+ For example, if you distribute copies of such a program, whether
+gratis or for a fee, you must give the recipients all the rights that
+you have. You must make sure that they, too, receive or can get the
+source code. And you must show them these terms so they know their
+rights.
+
+ We protect your rights with two steps: (1) copyright the software, and
+(2) offer you this license which gives you legal permission to copy,
+distribute and/or modify the software.
+
+ Also, for each author's protection and ours, we want to make certain
+that everyone understands that there is no warranty for this free
+software. If the software is modified by someone else and passed on, we
+want its recipients to know that what they have is not the original, so
+that any problems introduced by others will not reflect on the original
+authors' reputations.
+
+ Finally, any free program is threatened constantly by software
+patents. We wish to avoid the danger that redistributors of a free
+program will individually obtain patent licenses, in effect making the
+program proprietary. To prevent this, we have made it clear that any
+patent must be licensed for everyone's free use or not licensed at all.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ GNU GENERAL PUBLIC LICENSE
+ TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+ 0. This License applies to any program or other work which contains
+a notice placed by the copyright holder saying it may be distributed
+under the terms of this General Public License. The "Program", below,
+refers to any such program or work, and a "work based on the Program"
+means either the Program or any derivative work under copyright law:
+that is to say, a work containing the Program or a portion of it,
+either verbatim or with modifications and/or translated into another
+language. (Hereinafter, translation is included without limitation in
+the term "modification".) Each licensee is addressed as "you".
+
+Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope. The act of
+running the Program is not restricted, and the output from the Program
+is covered only if its contents constitute a work based on the
+Program (independent of having been made by running the Program).
+Whether that is true depends on what the Program does.
+
+ 1. You may copy and distribute verbatim copies of the Program's
+source code as you receive it, in any medium, provided that you
+conspicuously and appropriately publish on each copy an appropriate
+copyright notice and disclaimer of warranty; keep intact all the
+notices that refer to this License and to the absence of any warranty;
+and give any other recipients of the Program a copy of this License
+along with the Program.
+
+You may charge a fee for the physical act of transferring a copy, and
+you may at your option offer warranty protection in exchange for a fee.
+
+ 2. You may modify your copy or copies of the Program or any portion
+of it, thus forming a work based on the Program, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+ a) You must cause the modified files to carry prominent notices
+ stating that you changed the files and the date of any change.
+
+ b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any
+ part thereof, to be licensed as a whole at no charge to all third
+ parties under the terms of this License.
+
+ c) If the modified program normally reads commands interactively
+ when run, you must cause it, when started running for such
+ interactive use in the most ordinary way, to print or display an
+ announcement including an appropriate copyright notice and a
+ notice that there is no warranty (or else, saying that you provide
+ a warranty) and that users may redistribute the program under
+ these conditions, and telling the user how to view a copy of this
+ License. (Exception: if the Program itself is interactive but
+ does not normally print such an announcement, your work based on
+ the Program is not required to print an announcement.)
+
+These requirements apply to the modified work as a whole. If
+identifiable sections of that work are not derived from the Program,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works. But when you
+distribute the same sections as part of a whole which is a work based
+on the Program, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Program.
+
+In addition, mere aggregation of another work not based on the Program
+with the Program (or with a work based on the Program) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+ 3. You may copy and distribute the Program (or a work based on it,
+under Section 2) in object code or executable form under the terms of
+Sections 1 and 2 above provided that you also do one of the following:
+
+ a) Accompany it with the complete corresponding machine-readable
+ source code, which must be distributed under the terms of Sections
+ 1 and 2 above on a medium customarily used for software interchange; or,
+
+ b) Accompany it with a written offer, valid for at least three
+ years, to give any third party, for a charge no more than your
+ cost of physically performing source distribution, a complete
+ machine-readable copy of the corresponding source code, to be
+ distributed under the terms of Sections 1 and 2 above on a medium
+ customarily used for software interchange; or,
+
+ c) Accompany it with the information you received as to the offer
+ to distribute corresponding source code. (This alternative is
+ allowed only for noncommercial distribution and only if you
+ received the program in object code or executable form with such
+ an offer, in accord with Subsection b above.)
+
+The source code for a work means the preferred form of the work for
+making modifications to it. For an executable work, complete source
+code means all the source code for all modules it contains, plus any
+associated interface definition files, plus the scripts used to
+control compilation and installation of the executable. However, as a
+special exception, the source code distributed need not include
+anything that is normally distributed (in either source or binary
+form) with the major components (compiler, kernel, and so on) of the
+operating system on which the executable runs, unless that component
+itself accompanies the executable.
+
+If distribution of executable or object code is made by offering
+access to copy from a designated place, then offering equivalent
+access to copy the source code from the same place counts as
+distribution of the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+ 4. You may not copy, modify, sublicense, or distribute the Program
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense or distribute the Program is
+void, and will automatically terminate your rights under this License.
+However, parties who have received copies, or rights, from you under
+this License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+ 5. You are not required to accept this License, since you have not
+signed it. However, nothing else grants you permission to modify or
+distribute the Program or its derivative works. These actions are
+prohibited by law if you do not accept this License. Therefore, by
+modifying or distributing the Program (or any work based on the
+Program), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Program or works based on it.
+
+ 6. Each time you redistribute the Program (or any work based on the
+Program), the recipient automatically receives a license from the
+original licensor to copy, distribute or modify the Program subject to
+these terms and conditions. You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties to
+this License.
+
+ 7. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Program at all. For example, if a patent
+license would not permit royalty-free redistribution of the Program by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Program.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system, which is
+implemented by public license practices. Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+ 8. If the distribution and/or use of the Program is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Program under this License
+may add an explicit geographical distribution limitation excluding
+those countries, so that distribution is permitted only in or among
+countries not thus excluded. In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+ 9. The Free Software Foundation may publish revised and/or new versions
+of the General Public License from time to time. Such new versions will
+be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program
+specifies a version number of this License which applies to it and "any
+later version", you have the option of following the terms and conditions
+either of that version or of any later version published by the Free
+Software Foundation. If the Program does not specify a version number of
+this License, you may choose any version ever published by the Free Software
+Foundation.
+
+ 10. If you wish to incorporate parts of the Program into other free
+programs whose distribution conditions are different, write to the author
+to ask for permission. For software which is copyrighted by the Free
+Software Foundation, write to the Free Software Foundation; we sometimes
+make exceptions for this. Our decision will be guided by the two goals
+of preserving the free status of all derivatives of our free software and
+of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+ 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
+OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
+TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
+PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
+REPAIR OR CORRECTION.
+
+ 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
+INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
+OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
+TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
+YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
+PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
+POSSIBILITY OF SUCH DAMAGES.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License along
+ with this program; if not, write to the Free Software Foundation, Inc.,
+ 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this
+when it starts in an interactive mode:
+
+ Gnomovision version 69, Copyright (C) year name of author
+ Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
+ This is free software, and you are welcome to redistribute it
+ under certain conditions; type `show c' for details.
+
+The hypothetical commands `show w' and `show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may
+be called something other than `show w' and `show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+ Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+ `Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+ <signature of Ty Coon>, 1 April 1989
+ Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Lesser General
+Public License instead of this License.
\ No newline at end of file
diff --git a/infoslicer/README b/infoslicer/README
new file mode 100644
index 0000000..1f558d1
--- /dev/null
+++ b/infoslicer/README
@@ -0,0 +1,23 @@
+Platform independent InfoSlicer components
+
+InfoSlicer downloads articles from Wikipedia so that you can create
+new documents by dragging and dropping content from the Wikipedia
+articles. You can then publish the articles as a mini website.
+
+Copyright (C) IBM Corporation 2008
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License along
+with this program; if not, write to the Free Software Foundation, Inc.,
+51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+Contact: Laura Cowen (laura_cowen@uk.ibm.com)
diff --git a/infoslicer/__init__.py b/infoslicer/__init__.py new file mode 100644 index 0000000..e69de29 --- /dev/null +++ b/infoslicer/__init__.py diff --git a/infoslicer/processing/Article.py b/infoslicer/processing/Article.py new file mode 100644 index 0000000..157b3c8 --- /dev/null +++ b/infoslicer/processing/Article.py @@ -0,0 +1,773 @@ +# Copyright (C) IBM Corporation 2008
+
+import pygtk
+pygtk.require('2.0')
+import gtk
+from random import Random
+from Article_Data import *
+from Section import *
+import logging
+
+logger = logging.getLogger('infoslicer')
+
+arrow_xpm = [
+"15 11 4 1",
+" c None s None",
+". c black",
+"r c #800000",
+"R c #FF0000",
+" .. ",
+" .... ",
+" .rr.. ",
+" .....rRr.. ",
+"..rrrrrRRr.. ",
+"..rRRRRRRRr.. ",
+"..rRRRRRRr.. ",
+" .....rRr.. ",
+" .rr.. ",
+" .... ",
+" .. ",
+]
+
+
+
+class Article:
+    """
+    Created by Jonathan Mace
+
+    The Article class maintains a concrete representation of the article in
+    the form of a gtk.TextBuffer. Positions within the text are represented
+    by gtk.TextIter.
+
+    The class contains methods for inserting and deleting sentences,
+    paragraphs and sections, and for finding the most appropriate insertion
+    point for new sections. It maintains the section-based structure of the
+    article.
+
+    At any point, the Article_Data object corresponding to the current state
+    of the article can be retrieved via getData().
+    """
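The nested data model the docstring describes (article → sections → paragraphs → sentences, flattened into one gtk.TextBuffer) can be pictured without gtk at all. A minimal, hedged sketch with plain lists standing in for the *_Data classes (names and shapes are assumptions, not the real API):

```python
# Simplified stand-ins for the Article_Data / Section_Data /
# Paragraph_Data / Sentence_Data hierarchy: an article is a list of
# sections, a section a list of paragraphs, a paragraph a list of
# sentence strings.
article = [
    [["Intro sentence one. ", "Intro sentence two.\n"]],  # section 1
    [["Body sentence.\n"]],                               # section 2
]

def flatten(article):
    # Concatenate every sentence, the way the single gtk.TextBuffer
    # holds the whole article as one contiguous run of text.
    return "".join(s for section in article
                     for paragraph in section
                     for s in paragraph)

print(flatten(article))
```

Character offsets into this flattened string play roughly the role that gtk.TextIter positions play in the real class.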
+
+
+    def __init__(self, article_data = Article_Data()):
+        """
+        Construct the article to be displayed in the GUI from the
+        article_data object passed in.
+        """
+        # Create a default text buffer, initially empty
+ self.__buf = gtk.TextBuffer()
+ self.__buf.set_text("")
+ insertionpoint = self.__buf.get_end_iter()
+ insertionmark = self.__buf.create_mark(None, insertionpoint, False)
+
+        # Set attributes such as title, theme, id etc. as specified
+        # in the article_data parameter
+ self.id = article_data.id
+ self.article_title = article_data.article_title
+ self.article_theme = article_data.article_theme
+ self.source_article_id = article_data.source_article_id
+ self.image_list = article_data.get_image_list()
+
+        # The article is currently blank, so there are no sections
+ self.__sections = []
+
+
+        # Create new sections based on the section data in article_data.
+        # At this level nothing is actually inserted into the text buffer;
+        # text insertion happens at the sentence level, when sentences are
+        # created during the initialisation of each Section object.
+ sections_data = article_data.sections_data
+ for section_data in sections_data:
+ insertioniter = self.__buf.get_iter_at_mark(insertionmark)
+ self.__sections.append(Section(section_data, self.__buf, insertioniter))
+
+ self.__buf.delete_mark(insertionmark)
+
+        # Append dummy sections, containing nothing, at the start and
+        # end of the article.
+ startdummy = dummySection(self.__buf, self.__buf.get_start_iter(), True)
+ enddummy = dummySection(self.__buf, self.__buf.get_end_iter(), False)
+ self.__sections = [startdummy] + self.__sections + [enddummy]
+
+ self.markmark = None
+
+ def printsections(self):
+        """
+        Debugging helper: prints the contents of the article as represented by
+        the article/section/paragraph/sentence data structures, as opposed to
+        just the contents of the text buffer. If any elements are inconsistent
+        with where they begin and end, this method makes it apparent.
+        """
+ pass
+ """
+ for section in self.__sections:
+ print "section start: %s, end: %s, id: %s" % (section.getStart().get_offset(), section.getEnd().get_offset(), section.id)
+ paragraphs = section.paragraphs
+ for paragraph in paragraphs:
+ print " paragraph start: %s, end %s, id: %s" % (paragraph.getStart().get_offset(), paragraph.getEnd().get_offset(), paragraph.id)
+ sentences = paragraph.sentences
+ for sentence in sentences:
+ print " sentence start: %s, end: %s, id: %s, text: %s" % (sentence.getStart().get_offset(), sentence.getEnd().get_offset(), sentence.id, sentence.getText())
+ """
+
+ def getData(self):
+ """
+ Returns the article_data object corresponding to the current state of the article.
+ """
+ self.checkIntegrity()
+ id = self.id
+ source_article_id = self.source_article_id
+ article_title = self.article_title
+ article_theme = self.article_theme
+ image_list = self.image_list
+ sections_data = []
+ for section in self.__sections[1:len(self.__sections)-1]:
+ sections_data.append(section.getData())
+
+ data = Article_Data(id, source_article_id, article_title, article_theme, sections_data, image_list)
+
+ return data
+
+ def checkIntegrity(self):
+        """
+        When a user freely edits the text of an article, they can perform
+        actions such as completely deleting a sentence or concatenating two
+        sections. This method reparses the structure of the article to bring
+        the section/paragraph/sentence data back in line with the buffer.
+        """
+ i = 0
+ sections = []
+ while i < len(self.__sections)-1:
+ section = self.__sections[i]
+ nextsection = self.__sections[i+1]
+
+ if section.getStart().compare(nextsection.getStart()) == -1:
+ text = self.__buf.get_slice(section.getStart(), nextsection.getStart())
+ if len(text) > 2 and text[-2] != "\n":
+ nextsection.paragraphs = section.paragraphs + nextsection.paragraphs
+ else:
+ sections.extend(section.checkIntegrity(nextsection.getStart()))
+ else:
+ section.remove()
+ del self.__sections[i]
+ i = i - 1
+
+ i = i + 1
+
+        section = self.__sections[-1]
+        if section.getStart().compare(self.__buf.get_end_iter()) == -1:
+            # text would otherwise be left over from the loop above (or
+            # unbound if the loop body never ran), so recompute it here
+            text = self.__buf.get_slice(section.getStart(), self.__buf.get_end_iter())
+            if len(text) > 2 and text[-2] != "\n":
+ pars = section.paragraphs
+ par = pars[-1]
+ if text[-1] != "\n":
+ data = Sentence_Data(-1, -1, -1, -1, -1, "\n", None)
+ pars[-2].sentences.append(Sentence(data, self.__buf, par.getStart()))
+ data = Paragraph_Data(-1, -1, -1, -1, [])
+ pars.append(Paragraph(data, self.__buf, par.getEnd()))
+ elif par.getText() == "\n":
+ data = Sentence_Data(-1, -1, -1, -1, -1, "\n", None)
+ pars[-2].sentences.append(Sentence(data, self.__buf, par.getStart()))
+ else:
+ data = Paragraph_Data(-1, -1, -1, -1, [])
+ pars.append(Paragraph(data, self.__buf, par.getEnd()))
+ sections.extend(section.checkIntegrity(self.__buf.get_end_iter()))
+
+ self.__sections = sections
+
+
+ startdummy = dummySection(self.__buf, self.__buf.get_start_iter(), True)
+ enddummy = dummySection(self.__buf, self.__buf.get_end_iter(), False)
+ self.__sections = [startdummy] + self.__sections + [enddummy]
+ self.generateIds()
+
+ i = 1
+ while i < len(self.__sections)-1:
+ j = 0
+ section = self.__sections[i]
+ while j < len(section.paragraphs) - 1:
+ k = 0
+ paragraph = section.paragraphs[j]
+ while k < len(paragraph.sentences) - 1:
+ sentence = paragraph.sentences[k]
+ if sentence.getStart().compare(sentence.getEnd()) > -1:
+ sentence.remove()
+ del paragraph.sentences[k]
+ k = k - 1
+ k = k+1
+ if paragraph.sentences == []:
+ del section.paragraphs[j]
+ j = j - 1
+ j = j+1
+ if section.paragraphs == []:
+ del self.__sections[i]
+ i = i - 1
+ i = i+1
+
+
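The clean-up pass above removes empty sentences, paragraphs and sections with a delete-and-decrement index pattern. A gtk-free sketch of the same idea on nested lists (illustrative only; the list shapes are assumptions):

```python
def prune(sections):
    # Drop empty paragraphs from each section, then drop sections that
    # lost all their paragraphs, mirroring the index-decrement loops in
    # checkIntegrity above.
    i = 0
    while i < len(sections):
        section = sections[i]
        j = 0
        while j < len(section):
            if not section[j]:       # empty paragraph
                del section[j]
                j -= 1
            j += 1
        if not section:              # section is now empty
            del sections[i]
            i -= 1
        i += 1
    return sections

print(prune([[["s1"], []], [[]], [["s2"]]]))
```

Decrementing the index after a `del` keeps the scan from skipping the element that slides into the freed slot.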
+ def generateIds(self):
+ for section in self.__sections[1:len(self.__sections)-1]:
+ section.generateIds()
+
+
+ def insert(self, objects, lociter):
+ """
+ This method is used for inserting new sentences, paragraphs and/or sections into the article.
+
+ The position specified by lociter can be any location within the textbuffer.
+
+ Objects is a list of Section objects to be inserted into the article.
+
+        The list may also be preceded and followed by Paragraph objects, and
+        those in turn by Sentence objects. So objects will be a list of the form:
+
+        [sentence objects] ++ [paragraph objects] ++ [section objects] ++ [paragraph objects] ++ [sentence objects]
+
+        If sections are being inserted, then the first sentence array and the
+        first paragraph array will each end with a dummy object.
+
+        Likewise, if only paragraphs are being inserted, then the first
+        sentence array will end with a dummy object.
+
+        The section objects array and the second paragraph and sentence arrays
+        can all be empty.
+ """
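The first loop of insert() consumes leading sentence objects up to the empty-text placeholder. A hedged pure-Python sketch of that scan, with (type, text) tuples standing in for Sentence_Data (simplified stand-ins, not the real types):

```python
def leading_sentences(objects):
    # Take leading "sentence"/"picture" items until the empty-text
    # placeholder, which marks the boundary and is not itself inserted.
    taken = []
    while objects and objects[0][0] in ("sentence", "picture"):
        kind, text = objects.pop(0)
        if text == "":
            break  # placeholder reached: a split will be needed here
        taken.append(text)
    return taken, objects

print(leading_sentences([("sentence", "a"), ("sentence", ""), ("paragraph", [])]))
```

The remainder of the list is then handed on, just as insert() passes the leftover objects to the paragraph-level routine.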
+
+ sectionnumber = self.__get_exact_section(lociter)
+ if sectionnumber == len(self.__sections)-1:
+ self.__pad()
+ lociter = self.__sections[-2].getStart()
+ section = self.__sections[sectionnumber]
+
+ extra = 0
+ secstart = section.getStart()
+ secend = section.getEnd()
+
+ if secstart.compare(lociter)==0 and (secend.get_offset() - secstart.get_offset()) < 4:
+ extra = 3
+ elif secend.get_offset() - lociter.get_offset() < 4:
+ extra = 3
+
+ paragraph = section.getParagraph(lociter)
+ if paragraph == section.getParagraphs()[-1]:
+ section.pad()
+ paragraph = section.getParagraphs()[-2]
+ lociter = paragraph.getStart()
+
+ insertioniter = paragraph.getBestSentence(lociter).getStart()
+ insertionmark = self.__buf.create_mark(None, insertioniter, False)
+
+ self.insertionsectionstart = self.__buf.create_mark(None, section.getStart(), True)
+ self.insertionsectionend = self.__buf.create_mark(None, section.getEnd(), False)
+ self.insertionstartdist = insertioniter.get_offset() - section.getStart().get_offset()
+ self.insertionenddist = section.getEnd().get_offset() - insertioniter.get_offset() - extra
+
+ split = False
+
+
+ if objects != []:
+ object = objects[0]
+
+ if object.type == "section":
+ del objects[0]
+ dummyparagraphdata = Paragraph_Data(id = -1, sentences_data = [])
+ objects = object.paragraphs_data + [dummyparagraphdata] + objects
+ object = objects[0]
+ if object.type == "paragraph":
+ del objects[0]
+ dummysentencedata = Sentence_Data(id = -1, text = "")
+ objects = object.sentences_data + [dummysentencedata] + objects
+ object = objects[0]
+
+
+
+ while objects != [] and (object.type == "sentence" or object.type == "picture"):
+            # if text == "" then we have reached the end of the first list and
+            # must break. We don't insert this blank sentence; it is just a
+            # placeholder.
+ if object.text != "":
+ insertioniter = self.__buf.get_iter_at_mark(insertionmark)
+ paragraph.insertSentence(object, insertioniter)
+ else:
+ split = True
+ del objects[0]
+ break
+
+ del objects[0]
+ if objects != []:
+ object = objects[0]
+
+ splititer = self.__buf.get_iter_at_mark(insertionmark)
+ splitmark = self.__buf.create_mark(None, splititer, True)
+
+ if objects != []:
+ object = objects[-1]
+ while objects != [] and (object.type == "sentence" or object.type == "picture"):
+ # Now, we actually add the ending sentences first, then split the paragraph at the splitmark
+ # which was created between the two while loops
+ insertioniter = self.__buf.get_iter_at_mark(splitmark)
+ paragraph.insertSentence(object, insertioniter)
+
+ del objects[-1]
+ if objects != []:
+ object = objects[-1]
+
+
+ paragraph.clean()
+ section.clean()
+
+
+        # Now we simply split the paragraph at the splitmark, then call
+        # __insertParagraphs with the remaining contents of objects
+ if split:
+ splititer = self.__buf.get_iter_at_mark(splitmark)
+ offset = splititer.get_offset()
+ section.splitParagraph(splititer)
+ insertioniter = self.__buf.get_iter_at_offset(offset)
+ if objects != []:
+ self.__insertParagraphs(objects, insertioniter)
+
+ self.highlightDragResult()
+
+ def __insertParagraphs(self, objects, lociter):
+ """
+ This method is the same as the above insert method, except that sentence objects are not included.
+
+ So, objects is a list which can take the form:
+ [ paragraph objects ] ++ [ section objects ] ++ [ paragraph objects ]
+
+ And again, if the objects list does contain sections, then the first paragraph array will end with a dummy paragraph object.
+ """
+
+
+ sectionnumber = self.__get_exact_section(lociter)
+ section = self.__sections[sectionnumber]
+ lociter = self.__buf.get_iter_at_offset(lociter.get_offset()+1)
+
+ insertioniter = section.getBestParagraph(lociter).getStart()
+ insertionmark = self.__buf.create_mark(None, insertioniter, False)
+
+ split = False
+
+ object = objects[0]
+
+ if object.type == "section":
+ del objects[0]
+ blankparagraph = Paragraph_Data(id = -1, sentences_data = [])
+ objects = object.paragraphs_data + [blankparagraph] + objects
+ object = objects[0]
+
+ while objects != [] and object.type == "paragraph":
+ # First, deal with the paragraph triples. We insert these into the current section.
+ # Then when we run out of paragraph triples, we split the section.
+
+ # if sentences_data == [] then we have reached the end of the first list and must break.
+ # We do not insert this empty paragraph, it is just a placeholder.
+ if object.sentences_data != []:
+ insertioniter = self.__buf.get_iter_at_mark(insertionmark)
+ section.insertParagraph(object, insertioniter)
+ else:
+ split = True
+ del objects[0]
+ break
+
+ del objects[0]
+ if objects != []:
+ object = objects[0]
+
+ splititer = self.__buf.get_iter_at_mark(insertionmark)
+ splitmark = self.__buf.create_mark(None, splititer, True)
+
+ if objects != []:
+ object = objects[-1]
+ while objects != [] and object.type == "paragraph":
+ # Now, we actually add the ending paragraphs, then split the section at the splitmark
+ # which was created between the two while loops
+ insertioniter = self.__buf.get_iter_at_mark(splitmark)
+ section.insertParagraph(object, insertioniter)
+
+ del objects[-1]
+ if objects != []:
+ object = objects[-1]
+
+
+ # Now we simply split the section at the splitmark, then call the insertsections method with
+ # the remaining contents of objects
+ if split:
+ splititer = self.__buf.get_iter_at_mark(splitmark)
+ offset = splititer.get_offset()
+ splititer = self.getParagraph(splititer).getStart()
+ self.__splitSection(splititer)
+ insertioniter = self.__buf.get_iter_at_offset(offset)
+ if objects != []:
+ self.__insertSections(objects, insertioniter)
+
+ def __insertSections(self, objects, lociter):
+ """
+ objects is a list of section objects, and lociter is a location in the textbuffer
+
+ We find the closest section gap to the lociter specified, and then insert the sections at this point.
+ """
+ insertioniter = self.getBestSection(lociter).getStart()
+ insertionmark = self.__buf.create_mark(None, insertioniter, False)
+ for object in objects:
+ insertioniter = self.__buf.get_iter_at_mark(insertionmark)
+ self.insertSection(object, insertioniter)
+
+ def getSelection(self):
+ """
+ If the user has highlighted some text, this method returns the sentence/paragraph/section based
+ representation of the selection
+ """
+ buf = self.__buf
+ bounds = buf.get_selection_bounds()
+ if bounds[0].compare(bounds[1]) == 1:
+ start = bounds[1]
+ end = bounds[0]
+ else:
+ start = bounds[0]
+ end = bounds[1]
+ data = self.getRange(start, end)
+ return data
+
+
+ def getRange(self, startiter, enditer):
+ """
+ This method returns the section, paragraph and sentence objects between startiter and enditer
+ """
+
+ startindex = self.__get_exact_section(startiter)
+ endindex = self.__get_exact_section(enditer)
+ if startindex == endindex:
+ data = self.__sections[startindex].getDataRange(startiter, enditer)
+ else:
+ startdata = []
+ startsection = self.__sections[startindex]
+ if startiter.compare(startsection.getStart()) == 0:
+ startdata.append(self.__sections[startindex].getData())
+ else:
+ startdata.extend(startsection.getDataRange(startiter, startsection.getEnd()))
+ startdata.append(Paragraph_Data(id = -1, sentences_data = []))
+
+ middledata = []
+ for section in self.__sections[startindex+1:endindex]:
+ middledata.append(section.getData())
+
+ enddata = []
+ if endindex != len(self.__sections):
+ endsection = self.__sections[endindex]
+ enddata.extend(endsection.getDataRange(endsection.getStart(), enditer))
+
+ data = startdata + middledata + enddata
+
+ return data
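A minimal sketch of the assembly logic in getRange above, using plain lists of strings in place of Section/Paragraph objects (the names and tuple-based positions here are illustrative, not InfoSlicer's API): the tail of the partially-selected start section, a dummy separator, the whole middle sections, then the head of the end section.

```python
def get_range(sections, start, end):
    """Collect the content from start to end.

    sections is a list of sections, each a list of paragraphs.
    start and end are (section_index, paragraph_index) pairs; None plays
    the role of the dummy empty paragraph marking a section boundary."""
    s_sec, s_par = start
    e_sec, e_par = end
    if s_sec == e_sec:
        # Selection within a single section: delegate to a simple slice.
        return sections[s_sec][s_par:e_par]
    data = sections[s_sec][s_par:]            # tail of the start section
    data.append(None)                         # dummy separator paragraph
    for sec in sections[s_sec + 1:e_sec]:     # whole middle sections
        data.extend(sec)
        data.append(None)
    data.extend(sections[e_sec][:e_par])      # head of the end section
    return data
```

The real method keeps middle sections as whole section objects rather than flattening them; the separator trick is the same.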
+
+
+
+ def getBuffer(self):
+ """
+ This method simply returns the gtk.TextBuffer being maintained by this instance of the Article class.
+ """
+ return self.__buf
+
+
+ def insertSection(self, section_data, lociter):
+ """
+ This method inserts a single section into the article.
+
+ The section is represented by section_data, and the insertion point is specified by lociter
+
+ The section is inserted into the closest gap to lociter.
+ """
+ insertionindex = self.__get_best_section(lociter)
+ if insertionindex == 0: insertionindex = insertionindex + 1
+ insertioniter = self.__sections[insertionindex].getStart()
+ section = Section(section_data, self.__buf, insertioniter)
+ self.__sections.insert(insertionindex, section)
+
+ def deleteSection(self, lociter):
+ """
+ This method deletes the section which contains lociter.
+ """
+ deletionindex = self.__get_exact_section(lociter)
+ if deletionindex != len(self.__sections) - 1:
+ section = self.__sections[deletionindex]
+ section.delete()
+ del self.__sections[deletionindex]
+
+ def removeSection(self, lociter):
+ """
+ This method has the same effect as deleteSection, except that it will also remove the final section of the article.
+ """
+ removalindex = self.__get_exact_section(lociter)
+ section = self.__sections[removalindex]
+ section.delete()
+ del self.__sections[removalindex]
+
+ def deleteSelection(self, startiter, enditer):
+ """
+ This method deletes all sentence, paragraph and data objects from startiter to enditer.
+ """
+ startindex = self.__get_exact_section(startiter)
+ endindex = self.__get_exact_section(enditer)
+ if endindex == len(self.__sections) - 1:
+ endindex = endindex - 1
+ if startindex == endindex:
+ empty = self.__sections[startindex].deleteSelection(startiter, enditer)
+ if empty:
+ self.__sections[startindex].delete()
+ del self.__sections[startindex]
+ elif startindex < endindex:
+ startmark = self.__buf.create_mark(None, startiter, True)
+ endmark = self.__buf.create_mark(None, enditer, True)
+
+ endsection = self.__sections[endindex]
+ empty = endsection.deleteSelection(endsection.getStart(), self.__buf.get_iter_at_mark(endmark))
+ if empty:
+ self.__sections[endindex].delete()
+ del self.__sections[endindex]
+ self.__buf.delete_mark(endmark)
+
+ for i in range(startindex + 1, endindex):
+ self.__sections[startindex + 1].delete()
+ del self.__sections[startindex + 1]
+
+ startsection = self.__sections[startindex]
+ empty = startsection.deleteSelection(self.__buf.get_iter_at_mark(startmark), startsection.getEnd())
+ if empty:
+ self.__sections[startindex].delete()
+ del self.__sections[startindex]
+ self.__buf.delete_mark(startmark)
+
+ def rememberSelection(self):
+ """
+ This method is used to remember a specific selection.
+
+ It is currently used to remember what text the user is dragging around within the article.
+ """
+ bounds = self.__buf.get_selection_bounds()
+ self.selectionlength = bounds[1].get_offset() - bounds[0].get_offset()
+ self.selectionstartoffset = bounds[0].get_offset()
+ self.selectionstartmark = self.__buf.create_mark(None, bounds[0], True)
+ self.selectionendmark = self.__buf.create_mark(None, bounds[1], True)
+
+ def deleteDragSelection(self):
+ """
+ This method deletes the selection which was saved by the rememberSelection method
+
+ This occurs when a user is rearranging text within the same article; the text will be inserted somewhere,
+ and then the old text will be deleted.
+ """
+
+ deletestart = self.__buf.get_iter_at_mark(self.selectionstartmark)
+ deletestartoffset = deletestart.get_offset()
+
+ if deletestart.get_offset() != self.selectionstartoffset:
+ deleteend = self.__buf.get_iter_at_mark(self.selectionendmark)
+ deletestart = self.__buf.get_iter_at_offset(deleteend.get_offset() - self.selectionlength)
+ else:
+ deleteend = self.__buf.get_iter_at_offset(deletestartoffset + self.selectionlength)
+
+ self.deleteSelection(deletestart, deleteend)
+ self.__buf.delete_mark(self.selectionstartmark)
+ self.__buf.delete_mark(self.selectionendmark)
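The offset arithmetic in deleteDragSelection can be sketched on its own (hypothetical function, not part of the class): if the remembered start mark has drifted from its stored offset, the dropped text must have landed before the selection, so the range to delete is counted backwards from the end mark instead.

```python
def drag_delete_range(remembered_start, length, start_mark_now, end_mark_now):
    """Return the (start, end) offsets of the remembered selection to delete.

    remembered_start and length were stored before the drop; the two
    *_mark_now values are where the left-gravity marks sit afterwards."""
    if start_mark_now != remembered_start:
        # Drop landed before the selection and shifted it right:
        # count back from the end mark.
        return (end_mark_now - length, end_mark_now)
    # Drop landed at or after the selection: offsets are unchanged.
    return (remembered_start, remembered_start + length)
```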
+
+ def highlightDragResult(self):
+ """
+ When content is inserted into the article, the method that performs the insertion keeps track of where it was inserted.
+
+ This method highlights the inserted text.
+ """
+ startoffset = self.__buf.get_iter_at_mark(self.insertionsectionstart).get_offset() + self.insertionstartdist
+ endoffset = self.__buf.get_iter_at_mark(self.insertionsectionend).get_offset() - self.insertionenddist
+ startiter = self.__buf.get_iter_at_offset(startoffset)
+ enditer = self.__buf.get_iter_at_offset(endoffset)
+ self.__buf.select_range(startiter, enditer)
+ self.__buf.delete_mark(self.insertionsectionstart)
+ self.__buf.delete_mark(self.insertionsectionend)
+
+ def __get_best_section(self, lociter):
+ """
+ Given any position within the buffer, this method determines where the closest section gap is.
+
+ It then returns the index, within the self.__sections list, of the preceding section.
+ """
+ sectionindex = self.__get_exact_section(lociter)
+ section = self.__sections[sectionindex]
+ left = section.getStart().get_offset()
+ middle = lociter.get_offset()
+ right = section.getEnd().get_offset()
+ leftdist = middle - left
+ rightdist = right - middle
+
+ if (sectionindex < len(self.__sections)) and (leftdist > rightdist):
+ sectionindex = sectionindex + 1
+ return sectionindex
+
+ def __get_exact_section(self, lociter):
+ """
+ Given any position within the buffer, this method determines which section the lociter is inside.
+ """
+ i = 0
+ for i in range(len(self.__sections)-1):
+ start = self.__sections[i+1].getStart()
+ if lociter.compare(start) == -1:
+ return i
+ return len(self.__sections)-1
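The two lookups above reduce to simple offset arithmetic. This standalone sketch uses a sorted list of section start offsets in place of the section objects (names here are illustrative, not the class's real internals):

```python
def exact_section(starts, offset):
    """Index of the section containing offset.

    starts is a sorted list of section start offsets, starts[0] == 0."""
    for i in range(len(starts) - 1):
        if offset < starts[i + 1]:
            return i
    return len(starts) - 1

def best_section(starts, total_end, offset):
    """Index of the section following the gap closest to offset."""
    i = exact_section(starts, offset)
    left = starts[i]
    right = starts[i + 1] if i + 1 < len(starts) else total_end
    if (offset - left) > (right - offset):
        # Closer to the section's end than its start: pick the next gap.
        i += 1
    return i
```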
+
+ def highlight(self, startiter, enditer):
+ """
+ This method highlights the text between startiter and enditer.
+ """
+ comparison = startiter.compare(enditer)
+ if comparison == 0:
+ sentence = self.getSentence(startiter)
+ self.__buf.select_range(sentence.getStart(), sentence.getEnd())
+ else:
+ self.__buf.select_range(startiter, enditer)
+
+ def mark(self, lociter):
+ """
+ This method puts an arrow image at the start of the sentence that lociter is within.
+ """
+ sentence = self.getSentence(lociter)
+ self.clearArrow()
+ lociter = sentence.getStart()
+ self.markmark = self.__buf.create_mark(None, lociter, True)
+ self.__buf.insert(lociter, " ")
+ lociter = self.__buf.get_iter_at_mark(self.markmark)
+ arrow = gtk.gdk.pixbuf_new_from_xpm_data(arrow_xpm)
+ self.__buf.insert_pixbuf(lociter, arrow)
+
+
+ def clearArrow(self):
+ """
+ This method removes the arrow image, if there is one.
+ """
+ if self.markmark == None:
+ return
+ markiter = self.__buf.get_iter_at_mark(self.markmark)
+ markenditer = self.__buf.get_iter_at_offset(markiter.get_offset()+2)
+ self.__buf.delete(markiter, markenditer)
+ self.__buf.delete_mark(self.markmark)
+ self.markmark = None
+
+ def getBestSentence(self, lociter):
+ """
+ This method finds the closest sentence gap to lociter.
+
+ It then returns the sentence object of the first sentence to occur after the gap.
+ """
+ paragraph = self.getParagraph(lociter)
+ sentence = paragraph.getBestSentence(lociter)
+ return sentence
+
+ def getBestParagraph(self, lociter):
+ """
+ This method finds the closest paragraph gap to lociter.
+
+ It then returns the paragraph object of the first paragraph to occur after the gap.
+ """
+ section = self.getSection(lociter)
+ paragraph = section.getBestParagraph(lociter)
+ return paragraph
+
+ def getBestSection(self, lociter):
+ """
+ This method finds the closest section gap to lociter.
+
+ It then returns the section object of the first section to occur after the gap.
+ """
+ sectionindex = self.__get_best_section(lociter)
+ if sectionindex == len(self.__sections):
+ return self.__sections[-1]
+ else:
+ return self.__sections[sectionindex]
+
+ def getSentence(self, lociter):
+ """
+ This method returns the sentence which contains lociter.
+ """
+ paragraph = self.getParagraph(lociter)
+ sentence = paragraph.getSentence(lociter)
+ return sentence
+
+ def getParagraph(self, lociter):
+ """
+ This method returns the paragraph which contains lociter.
+ """
+ section = self.getSection(lociter)
+ paragraph = section.getParagraph(lociter)
+ return paragraph
+
+ def getSection(self, lociter):
+ """
+ This method returns the section which contains lociter.
+ """
+ sectionindex = self.__get_exact_section(lociter)
+ section = self.__sections[sectionindex]
+ return section
+
+ def __splitSection(self, lociter):
+ """
+ This method finds the section which contains lociter.
+
+ It then finds the closest paragraph gap to lociter.
+
+ The section is then split into two sections, one containing all the paragraphs before the gap,
+ the other containing all the paragraphs after the gap.
+ """
+ sectionindex = self.__get_exact_section(lociter)
+ section = self.__sections[sectionindex]
+
+ source_article_id = section.source_article_id
+ source_section_id = section.source_section_id
+
+ offset = lociter.get_offset()
+ section.splitParagraph(lociter)
+ lociter = self.__buf.get_iter_at_offset(offset)
+
+
+ firstdata = section.getDataRange(section.getStart(), lociter)
+ seconddata = section.getDataRange(lociter, section.getEnd())
+ mark = self.__buf.create_mark(None, lociter, False)
+ if firstdata != [] and seconddata != []:
+ self.deleteSection(lociter)
+
+ insertioniter = self.__buf.get_iter_at_mark(mark)
+ sectiondata = Section_Data(None, source_article_id, source_section_id, firstdata)
+ section = Section(sectiondata, self.__buf, insertioniter)
+ self.__sections.insert(sectionindex, section)
+
+ insertioniter = self.__buf.get_iter_at_mark(mark)
+ sectiondata = Section_Data(None, source_article_id, source_section_id, seconddata)
+ section = Section(sectiondata, self.__buf, insertioniter)
+ self.__sections.insert(sectionindex+1, section)
+
+ def __pad(self):
+ """
+ This method adds an empty section at the end of the article.
+
+ It is currently used in preparation for something being inserted at the end of the article.
+ """
+ sentencedata = Sentence_Data(id = -1, text = " ")
+ paragraphdata = Paragraph_Data(id = -1, sentences_data = [sentencedata])
+ sectiondata = Section_Data(id = -1, paragraphs_data = [paragraphdata])
+ insertioniter = self.__sections[-1].getStart()
+ section = Section(sectiondata, self.__buf, insertioniter)
+ self.__sections.insert(-1, section)
+
+ def __clean(self):
+ """
+ Removes the effects of one use of pad.
+
+ If pad has been called more than once, then clean must be called the same number of times.
+ """
+ if len(self.__sections) > 2:
+ section = self.__sections[-2]
+ sectionisempty = section.clean()
+ if sectionisempty:
+ del self.__sections[-2]
+
diff --git a/infoslicer/processing/Article_Builder.py b/infoslicer/processing/Article_Builder.py
new file mode 100644
index 0000000..6bfce47
--- /dev/null
+++ b/infoslicer/processing/Article_Builder.py
@@ -0,0 +1,242 @@
+# Copyright (C) IBM Corporation 2008
+
+from BeautifulSoup import Tag
+from NewtifulSoup import NewtifulStoneSoup as BeautifulStoneSoup
+from Article_Data import *
+import re
+import os
+import logging
+
+logger = logging.getLogger('infoslicer')
+
+"""
+Created by Christopher Leonard.
+
+ID descriptions:
+0 - picture
+1 - heading
+> 1 - anything
+
+This module converts between the DITA and article_data representations of articles. Badly in need of refactoring!
+"""
+def get_article_from_dita(image_path, dita):
+ """
+ This method takes an article in DITA format as input, parses the DITA, and outputs the corresponding article_data object
+ """
+ has_shortdesc = False
+ input = BeautifulStoneSoup(dita)
+ article_id = input.resourceid['id']
+ current_section_id = ""
+ current_p_id = ""
+ sentence_data_list = []
+ paragraph_data_list = []
+ section_data_list = []
+ if input.find("shortdesc") != None:
+ paragraph_data=[]
+ for ph in input.shortdesc.findAll("ph"):
+ id = ph['id']
+ source_sentence_id = id
+ source_paragraph_id = "shortdesc"
+ source_section_id = "shortdesc"
+ source_article_id = article_id
+ text = ph.renderContents().replace("\n", "").replace("&#160;", "").strip() + " "
+ if text[0:5] == "Satur":
+ logger.debug(unicode(text))
+ sentence_data = Sentence_Data(id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, text)
+ sentence_data_list.append(sentence_data)
+ paragraph_data.append(Paragraph_Data("shortdesc", article_id, "shortdesc", "shortdesc", sentence_data_list))
+ section_data = Section_Data("shortdesc", article_id, "shortdesc", paragraph_data)
+ section_data_list.append(section_data)
+ sentence_data_list = []
+ input.shortdesc.extract()
+ has_shortdesc = True
+ taglist = input.findAll(re.compile("refbody|section|p|ph|image"))
+ for i in xrange(len(taglist)):
+ tag = taglist[len(taglist) - i - 1]
+ if tag.name == "ph":
+ id = tag['id']
+ source_sentence_id = id
+ source_paragraph_id = current_p_id
+ source_section_id = current_section_id
+ source_article_id = article_id
+ text = tag.renderContents().replace("\n", "").replace("&#160;", "").strip() + " "
+ sentence_data = Sentence_Data(id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, text)
+ sentence_data_list.insert(0, sentence_data)
+ elif tag.name == "p":
+ if not tag.has_key("id"):
+ id = -1
+ else:
+ id = tag['id']
+ source_paragraph_id = id
+ source_section_id = current_section_id
+ source_article_id = article_id
+ paragraph_data = Paragraph_Data(id, source_article_id, source_section_id, source_paragraph_id, sentence_data_list)
+ paragraph_data_list.insert(0, paragraph_data)
+ sentence_data_list = []
+ current_p_id = id
+ elif tag.name == "refbody" :
+ if tag.findParent("reference").has_key("id"):
+ id = "r" + tag.findParent("reference")['id']
+ else:
+ id = "r90000"
+ source_section_id = id
+ source_article_id = article_id
+ section_data = Section_Data(id, source_article_id, source_section_id, paragraph_data_list)
+ if has_shortdesc:
+ section_data_list.insert(1,section_data)
+ else:
+ section_data_list.insert(0,section_data)
+ if tag.findChild("title", recursive=False) != None:
+ heading = tag.findChild('title').renderContents().replace("\n", "").replace("&#160;", "").strip()
+ sen = Sentence_Data(1, source_article_id, source_section_id, 1, 1, heading)
+ par = Paragraph_Data(1, source_article_id, source_section_id, 1, [sen])
+ headingdata = Section_Data(1, source_article_id, source_section_id, [par])
+
+ if has_shortdesc:
+ section_data_list.insert(1,headingdata)
+ else:
+ section_data_list.insert(0,headingdata)
+ paragraph_data_list = []
+ current_section_id = tag.name[0] + id
+
+ elif tag.name == "section":
+ id = "s" + tag['id']
+ source_section_id = id
+ source_article_id = article_id
+
+ section_data = Section_Data(id, source_article_id, source_section_id, paragraph_data_list)
+ if has_shortdesc:
+ section_data_list.insert(1,section_data)
+ else:
+ section_data_list.insert(0,section_data)
+ if tag.findChild("title", recursive=False) != None:
+ heading = tag.findChild('title').renderContents().replace("\n", "").replace("&#160;", "").strip()
+ sen = Sentence_Data(1, source_article_id, source_section_id, 1, 1, heading)
+ par = Paragraph_Data(1, source_article_id, source_section_id, 1, [sen])
+ headingdata = Section_Data(1, source_article_id, source_section_id, [par])
+
+ if has_shortdesc:
+ section_data_list.insert(1,headingdata)
+ else:
+ section_data_list.insert(0,headingdata)
+ paragraph_data_list = []
+ current_section_id = id
+
+ elif tag.name == "image":
+
+ if tag.parent.name == "p":
+ source_article_id = article_id
+ text = image_path + '/' + tag['href']
+ if not os.path.exists(text):
+ logger.info('cannot find image %s' % text)
+ else:
+ picture_data = Picture_Data(source_article_id, text,
+ tag['orig_href'])
+ sentence_data_list.insert(0, picture_data)
+
+ article_title = input.find("title").renderContents().replace("\n", "").strip()
+
+ image_list = []
+ imglist_tag = input.find(True, attrs={"id" : "imagelist"})
+ if imglist_tag != None:
+ for img in imglist_tag.findAll("image"):
+ caption = img.findChild("alt")
+ if caption != None:
+ caption = caption.renderContents().replace("\n", "").strip()
+ else:
+ caption = ""
+ if not os.path.exists(os.path.join(image_path, img['href'])):
+ logger.info('cannot find image %s' % img['href'])
+ else:
+ image_list.append((img['href'], caption, img['orig_href']))
+
+ data = Article_Data(article_id, article_id, article_title, "theme", section_data_list, image_list)
+
+ return data
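The traversal above walks the flat tag list in reverse so that each sentence is collected before its enclosing paragraph is closed, and each paragraph before its section. A minimal sketch of that idea using xml.etree.ElementTree instead of BeautifulStoneSoup (element names mirror the DITA handled here; this is an illustration, not the shipping parser):

```python
import xml.etree.ElementTree as ET

def parse_sections(dita):
    """Collapse DITA-like markup into nested lists of sentence strings."""
    root = ET.fromstring(dita)
    tags = list(root.iter())
    sentences, paragraphs, sections = [], [], []
    for tag in reversed(tags):       # children are visited before parents
        if tag.tag == "ph":
            sentences.insert(0, tag.text)
        elif tag.tag == "p":
            # All of this paragraph's sentences have been seen already.
            paragraphs.insert(0, sentences)
            sentences = []
        elif tag.tag == "section":
            sections.insert(0, paragraphs)
            paragraphs = []
    return sections
```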
+
+
+def get_dita_from_article(image_path, article):
+ """
+ This method takes as input an instance of the Article class.
+ It calls the getData method of the article class to get the article_data representation of the article.
+ It then constructs the corresponding DITA representation of the article.
+ """
+ article_data = article.getData()
+ output = BeautifulStoneSoup("<?xml version='1.0' encoding='utf-8'?><!DOCTYPE reference PUBLIC \"-//IBM//DTD DITA IBM Reference//EN\" \"ibm-reference.dtd\"><reference><title>%s</title><prolog></prolog></reference>" % article_data.article_title)
+ current_ref = output.reference
+ current_title = None
+
+ for section in article_data.sections_data:
+ #headings check
+ if len(section.paragraphs_data) == 1 and len(section.paragraphs_data[0].sentences_data) == 1 and section.paragraphs_data[0].sentences_data[0].id == 1:
+ paragraph = section.paragraphs_data[0]
+ current_title = paragraph.sentences_data[0].text
+ elif str(section.id).startswith("r"):
+ reference_tag = _tag_generator(output, "reference", attrs=[("id", section.id.replace("r", ""))])
+ if current_title != None:
+ reference_tag.append(_tag_generator(output, "title", contents=current_title))
+ current_title = None
+ reference_tag.append(_tag_generator(output, "refbody"))
+ for paragraph in section.paragraphs_data:
+ if paragraph.id == "shortdesc":
+ paragraph_tag = _tag_generator(output, "shortdesc")
+ else:
+ paragraph_tag = _tag_generator(output, "p", attrs=[("id", str(paragraph.id))])
+ for sentence in paragraph.sentences_data:
+ ph_tag = _tag_generator(output, "ph", attrs=[("id", str(sentence.id))], contents = sentence.text)
+ paragraph_tag.append(ph_tag)
+ reference_tag.refbody.append(paragraph_tag)
+ output.reference.append(reference_tag)
+ current_ref = reference_tag.refbody
+ else:
+ if section.id == "shortdesc":
+ section_tag = _tag_generator(output, "section", attrs=[("id", "shortdesc")])
+ else:
+ section_tag = _tag_generator(output, "section", attrs=[("id", str(section.id).replace("s", ""))])
+ if current_title != None:
+ section_tag.append(_tag_generator(output, "title", contents=current_title))
+ current_title = None
+ for paragraph in section.paragraphs_data:
+ paragraph_tag = _tag_generator(output, "p", attrs=[("id", str(paragraph.id))])
+ for sentence in paragraph.sentences_data:
+ if sentence.type == "sentence":
+ ph_tag = _tag_generator(output, "ph", attrs=[("id", str(sentence.id))], contents = sentence.text)
+ paragraph_tag.append(ph_tag)
+ elif sentence.type == "picture":
+ # switch image to relative path
+ text = sentence.text.replace(image_path, '') \
+ .lstrip('/')
+ image_tag = _tag_generator(output,
+ "image", attrs=[("href", text),
+ ('orig_href', sentence.orig)])
+ paragraph_tag.append(image_tag)
+ else:
+ logger.debug(sentence.type)
+
+ section_tag.append(paragraph_tag)
+ current_ref.append(section_tag)
+ if current_title != None:
+ current_ref.append('<section id="56756757"><p id="6875534"><ph id="65657657">%s</ph></p></section>' % current_title)
+ current_title = None
+ if article_data.image_list != []:
+ for unnecessary_tag in output.findAll(True, attrs={"id" : "imagelist"}):
+ unnecessary_tag.extract()
+ image_list = _tag_generator(output, "reference", [("id", "imagelist")])
+ output.reference.append(image_list)
+ image_list_body = _tag_generator(output, "refbody")
+ image_list.append(image_list_body)
+ for image in article_data.image_list:
+ image_tag = _tag_generator(output, "image", [("href", image[0]), ("orig_href", image[2])], "<alt>" + image[-1] + "</alt>")
+ image_list_body.append(image_tag)
+ dita = output.prettify()
+
+ return dita
+
+def _tag_generator(soup, name, attrs=[], contents=None):
+ if attrs != []:
+ new_tag = Tag(soup, name, attrs)
+ else:
+ new_tag = Tag(soup, name)
+ if contents != None:
+ new_tag.insert(0, contents)
+ return new_tag
diff --git a/infoslicer/processing/Article_Data.py b/infoslicer/processing/Article_Data.py
new file mode 100644
index 0000000..042d9d5
--- /dev/null
+++ b/infoslicer/processing/Article_Data.py
@@ -0,0 +1,79 @@
+# Copyright (C) IBM Corporation 2008
+
+import random
+
+"""
+Created by Jonathan Mace
+
+Each class in this file represents the data associated with an element of an article.
+
+These are the data objects which are passed around to and from the Article class.
+"""
+
+class Sentence_Data:
+
+ def __init__(self, id = None, source_article_id = -1, source_section_id = -1, source_paragraph_id = -1, source_sentence_id = -1, text = "", formatting = None):
+ if id == None:
+ self.id = random.randint(100, 100000)
+ else:
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.source_paragraph_id = source_paragraph_id
+ self.source_sentence_id = source_sentence_id
+ self.text = text
+ self.formatting = formatting
+ self.type = "sentence"
+
+class Picture_Data:
+
+ def __init__(self, source_article_id = -1, text = None, orig=None):
+ self.source_article_id = source_article_id
+ self.id = 0
+ self.text = text
+ self.type = "picture"
+ self.orig = orig
+
+
+class Paragraph_Data:
+
+ def __init__(self, id = None, source_article_id = -1, source_section_id = -1, source_paragraph_id = -1, sentences_data = []):
+ if id == None:
+ self.id = random.randint(100, 100000)
+ else:
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.source_paragraph_id = source_paragraph_id
+ self.sentences_data = sentences_data
+ self.type = "paragraph"
+
+class Section_Data:
+
+ def __init__(self, id = None, source_article_id = -1, source_section_id = -1, paragraphs_data = []):
+ if id == None:
+ self.id = random.randint(100, 100000)
+ else:
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.paragraphs_data = paragraphs_data
+ self.type = "section"
+
+class Article_Data:
+
+ def __init__(self, id = None, source_article_id = -1, article_title = None, article_theme = None, sections_data = [], image_list=[]):
+ if id == None:
+ self.id = random.randint(100, 100000)
+ else:
+ self.id = id
+ self.source_article_id = source_article_id
+ self.article_title = article_title
+ self.article_theme = article_theme
+ self.sections_data = sections_data
+ self.type = "article"
+ self.image_list = image_list
+
+ def get_image_list(self):
+ return self.image_list
+
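The data classes above nest article -> sections -> paragraphs -> sentences. A cut-down sketch of that shape (the real classes also carry source ids; these stand-ins are just for illustration):

```python
import random

class Sentence(object):
    def __init__(self, id=None, text=""):
        # Mirrors the classes above: a random id when none is supplied.
        self.id = random.randint(100, 100000) if id is None else id
        self.text = text
        self.type = "sentence"

class Paragraph(object):
    def __init__(self, sentences):
        self.sentences_data = sentences
        self.type = "paragraph"

class Section(object):
    def __init__(self, paragraphs):
        self.paragraphs_data = paragraphs
        self.type = "section"

# Built bottom-up, as get_article_from_dita does:
section = Section([Paragraph([Sentence(id=1, text="Hello. "),
                              Sentence(id=2, text="World. ")])])
```

Note that the real constructors use mutable default arguments (sentences_data = [], etc.), so every list should be passed in explicitly, as the code above does; relying on the defaults would share one list across instances.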
diff --git a/infoslicer/processing/BeautifulSoup.py b/infoslicer/processing/BeautifulSoup.py
new file mode 100644
index 0000000..b2a78e3
--- /dev/null
+++ b/infoslicer/processing/BeautifulSoup.py
@@ -0,0 +1,2002 @@
+"""Beautiful Soup
+Elixir and Tonic
+"The Screen-Scraper's Friend"
+http://www.crummy.com/software/BeautifulSoup/
+
+Beautiful Soup parses a (possibly invalid) XML or HTML document into a
+tree representation. It provides methods and Pythonic idioms that make
+it easy to navigate, search, and modify the tree.
+
+A well-formed XML/HTML document yields a well-formed data
+structure. An ill-formed XML/HTML document yields a correspondingly
+ill-formed data structure. If your document is only locally
+well-formed, you can use this library to find and process the
+well-formed part of it.
+
+Beautiful Soup works with Python 2.2 and up. It has no external
+dependencies, but you'll have more success at converting data to UTF-8
+if you also install these three packages:
+
+* chardet, for auto-detecting character encodings
+  http://chardet.feedparser.org/
+* cjkcodecs and iconv_codec, which add more encodings to the ones supported
+  by stock Python.
+  http://cjkpython.i18n.org/
+
+Beautiful Soup defines classes for two main parsing strategies:
+
+ * BeautifulStoneSoup, for parsing XML, SGML, or your domain-specific
+   language that kind of looks like XML.
+
+ * BeautifulSoup, for parsing run-of-the-mill HTML code, be it valid
+   or invalid. This class has web browser-like heuristics for
+   obtaining a sensible parse tree in the face of common HTML errors.
+
+Beautiful Soup also defines a class (UnicodeDammit) for autodetecting
+the encoding of an HTML or XML document, and converting it to
+Unicode. Much of this code is taken from Mark Pilgrim's Universal
+Feed Parser.
+
+For more than you ever wanted to know about Beautiful Soup, see the
+documentation:
+http://www.crummy.com/software/BeautifulSoup/documentation.html
+
+Here, have some legalese:
+
+Copyright (c) 2004-2009, Leonard Richardson
+
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+  * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+
+  * Redistributions in binary form must reproduce the above
+    copyright notice, this list of conditions and the following
+    disclaimer in the documentation and/or other materials provided
+    with the distribution.
+
+  * Neither the name of the the Beautiful Soup Consortium and All
+    Night Kosher Bakery nor the names of its contributors may be
+    used to endorse or promote products derived from this software
+    without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE, DAMMIT.
+
+"""
+from __future__ import generators
+
+__author__ = "Leonard Richardson (leonardr@segfault.org)"
+__version__ = "3.1.0.1"
+__copyright__ = "Copyright (c) 2004-2009 Leonard Richardson"
+__license__ = "New-style BSD"
+
+import codecs
+import markupbase
+import types
+import re
+from HTMLParser import HTMLParser, HTMLParseError
+try:
+    from htmlentitydefs import name2codepoint
+except ImportError:
+    name2codepoint = {}
+try:
+    set
+except NameError:
+    from sets import Set as set
+
+#These hacks make Beautiful Soup able to parse XML with namespaces
+markupbase._declname_match = re.compile(r'[a-zA-Z][-_.:a-zA-Z0-9]*\s*').match
+
+DEFAULT_OUTPUT_ENCODING = "utf-8"
+
+# First, the classes that represent markup elements.
+
+def sob(unicode, encoding):
+    """Returns either the given Unicode string or its encoding."""
+    if encoding is None:
+        return unicode
+    else:
+        return unicode.encode(encoding)
+
+class PageElement:
+    """Contains the navigational information for some part of the page
+    (either a tag or a piece of text)"""
+
+    def setup(self, parent=None, previous=None):
+        """Sets up the initial relations between this element and
+        other elements."""
+        self.parent = parent
+        self.previous = previous
+        self.next = None
+        self.previousSibling = None
+        self.nextSibling = None
+        if self.parent and self.parent.contents:
+            self.previousSibling = self.parent.contents[-1]
+            self.previousSibling.nextSibling = self
+
+    def replaceWith(self, replaceWith):
+        oldParent = self.parent
+        myIndex = self.parent.contents.index(self)
+        if hasattr(replaceWith, 'parent') and replaceWith.parent == self.parent:
+            # We're replacing this element with one of its siblings.
+            index = self.parent.contents.index(replaceWith)
+            if index and index < myIndex:
+                # Furthermore, it comes before this element. That
+                # means that when we extract it, the index of this
+                # element will change.
+                myIndex = myIndex - 1
+        self.extract()
+        oldParent.insert(myIndex, replaceWith)
+
+    def extract(self):
+        """Destructively rips this element out of the tree."""
+        if self.parent:
+            try:
+                self.parent.contents.remove(self)
+            except ValueError:
+                pass
+
+        #Find the two elements that would be next to each other if
+        #this element (and any children) hadn't been parsed. Connect
+        #the two.
+        lastChild = self._lastRecursiveChild()
+        nextElement = lastChild.next
+
+        if self.previous:
+            self.previous.next = nextElement
+        if nextElement:
+            nextElement.previous = self.previous
+        self.previous = None
+        lastChild.next = None
+
+        self.parent = None
+        if self.previousSibling:
+            self.previousSibling.nextSibling = self.nextSibling
+        if self.nextSibling:
+            self.nextSibling.previousSibling = self.previousSibling
+        self.previousSibling = self.nextSibling = None
+        return self
+
+    def _lastRecursiveChild(self):
+        "Finds the last element beneath this object to be parsed."
+        lastChild = self
+        while hasattr(lastChild, 'contents') and lastChild.contents:
+            lastChild = lastChild.contents[-1]
+        return lastChild
+
+    def insert(self, position, newChild):
+        if (isinstance(newChild, basestring)
+            or isinstance(newChild, unicode)) \
+            and not isinstance(newChild, NavigableString):
+            newChild = NavigableString(newChild)
+
+        position = min(position, len(self.contents))
+        if hasattr(newChild, 'parent') and newChild.parent != None:
+            # We're 'inserting' an element that's already one
+            # of this object's children.
+            if newChild.parent == self:
+                index = self.find(newChild)
+                if index and index < position:
+                    # Furthermore we're moving it further down the
+                    # list of this object's children. That means that
+                    # when we extract this element, our target index
+                    # will jump down one.
+                    position = position - 1
+            newChild.extract()
+
+        newChild.parent = self
+        previousChild = None
+        if position == 0:
+            newChild.previousSibling = None
+            newChild.previous = self
+        else:
+            previousChild = self.contents[position-1]
+            newChild.previousSibling = previousChild
+            newChild.previousSibling.nextSibling = newChild
+            newChild.previous = previousChild._lastRecursiveChild()
+        if newChild.previous:
+            newChild.previous.next = newChild
+
+        newChildsLastElement = newChild._lastRecursiveChild()
+
+        if position >= len(self.contents):
+            newChild.nextSibling = None
+
+            parent = self
+            parentsNextSibling = None
+            while not parentsNextSibling:
+                parentsNextSibling = parent.nextSibling
+                parent = parent.parent
+                if not parent: # This is the last element in the document.
+                    break
+            if parentsNextSibling:
+                newChildsLastElement.next = parentsNextSibling
+            else:
+                newChildsLastElement.next = None
+        else:
+            nextChild = self.contents[position]
+            newChild.nextSibling = nextChild
+            if newChild.nextSibling:
+                newChild.nextSibling.previousSibling = newChild
+            newChildsLastElement.next = nextChild
+
+        if newChildsLastElement.next:
+            newChildsLastElement.next.previous = newChildsLastElement
+        self.contents.insert(position, newChild)
+
+    def append(self, tag):
+        """Appends the given tag to the contents of this tag."""
+        self.insert(len(self.contents), tag)
+
+    def findNext(self, name=None, attrs={}, text=None, **kwargs):
+        """Returns the first item that matches the given criteria and
+        appears after this Tag in the document."""
+        return self._findOne(self.findAllNext, name, attrs, text, **kwargs)
+
+    def findAllNext(self, name=None, attrs={}, text=None, limit=None,
+                    **kwargs):
+        """Returns all items that match the given criteria and appear
+        after this Tag in the document."""
+        return self._findAll(name, attrs, text, limit, self.nextGenerator,
+                             **kwargs)
+
+    def findNextSibling(self, name=None, attrs={}, text=None, **kwargs):
+        """Returns the closest sibling to this Tag that matches
the + given criteria and appears after this Tag in the document.""" + return self._findOne(self.findNextSiblings, name, attrs, text, + **kwargs) + + def findNextSiblings(self, name=None, attrs={}, text=None, limit=None, + **kwargs): + """Returns the siblings of this Tag that match the given + criteria and appear after this Tag in the document.""" + return self._findAll(name, attrs, text, limit, + self.nextSiblingGenerator, **kwargs) + fetchNextSiblings = findNextSiblings # Compatibility with pre-3.x + + def findPrevious(self, name=None, attrs={}, text=None, **kwargs): + """Returns the first item that matches the given criteria and + appears before this Tag in the document.""" + return self._findOne(self.findAllPrevious, name, attrs, text, **kwargs) + + def findAllPrevious(self, name=None, attrs={}, text=None, limit=None, + **kwargs): + """Returns all items that match the given criteria and appear + before this Tag in the document.""" + return self._findAll(name, attrs, text, limit, self.previousGenerator, + **kwargs) + fetchPrevious = findAllPrevious # Compatibility with pre-3.x + + def findPreviousSibling(self, name=None, attrs={}, text=None, **kwargs): + """Returns the closest sibling to this Tag that matches the + given criteria and appears before this Tag in the document.""" + return self._findOne(self.findPreviousSiblings, name, attrs, text, + **kwargs) + + def findPreviousSiblings(self, name=None, attrs={}, text=None, + limit=None, **kwargs): + """Returns the siblings of this Tag that match the given + criteria and appear before this Tag in the document.""" + return self._findAll(name, attrs, text, limit, + self.previousSiblingGenerator, **kwargs) + fetchPreviousSiblings = findPreviousSiblings # Compatibility with pre-3.x + + def findParent(self, name=None, attrs={}, **kwargs): + """Returns the closest parent of this Tag that matches the given + criteria.""" + # NOTE: We can't use _findOne because findParents takes a different + # set of arguments. 
+ r = None + l = self.findParents(name, attrs, 1) + if l: + r = l[0] + return r + + def findParents(self, name=None, attrs={}, limit=None, **kwargs): + """Returns the parents of this Tag that match the given + criteria.""" + + return self._findAll(name, attrs, None, limit, self.parentGenerator, + **kwargs) + fetchParents = findParents # Compatibility with pre-3.x + + #These methods do the real heavy lifting. + + def _findOne(self, method, name, attrs, text, **kwargs): + r = None + l = method(name, attrs, text, 1, **kwargs) + if l: + r = l[0] + return r + + def _findAll(self, name, attrs, text, limit, generator, **kwargs): + "Iterates over a generator looking for things that match." + + if isinstance(name, SoupStrainer): + strainer = name + else: + # Build a SoupStrainer + strainer = SoupStrainer(name, attrs, text, **kwargs) + results = ResultSet(strainer) + g = generator() + while True: + try: + i = g.next() + except StopIteration: + break + if i: + found = strainer.search(i) + if found: + results.append(found) + if limit and len(results) >= limit: + break + return results + + #These Generators can be used to navigate starting from both + #NavigableStrings and Tags. + def nextGenerator(self): + i = self + while i: + i = i.next + yield i + + def nextSiblingGenerator(self): + i = self + while i: + i = i.nextSibling + yield i + + def previousGenerator(self): + i = self + while i: + i = i.previous + yield i + + def previousSiblingGenerator(self): + i = self + while i: + i = i.previousSibling + yield i + + def parentGenerator(self): + i = self + while i: + i = i.parent + yield i + + # Utility methods + def substituteEncoding(self, str, encoding=None): + encoding = encoding or "utf-8" + return str.replace("%SOUP-ENCODING%", encoding) + + def toEncoding(self, s, encoding=None): + """Encodes an object to a string in some encoding, or to Unicode. 
+ .""" + if isinstance(s, unicode): + if encoding: + s = s.encode(encoding) + elif isinstance(s, str): + if encoding: + s = s.encode(encoding) + else: + s = unicode(s) + else: + if encoding: + s = self.toEncoding(str(s), encoding) + else: + s = unicode(s) + return s + +class NavigableString(unicode, PageElement): + + def __new__(cls, value): + """Create a new NavigableString. + + When unpickling a NavigableString, this method is called with + the string in DEFAULT_OUTPUT_ENCODING. That encoding needs to be + passed in to the superclass's __new__ or the superclass won't know + how to handle non-ASCII characters. + """ + if isinstance(value, unicode): + return unicode.__new__(cls, value) + return unicode.__new__(cls, value, DEFAULT_OUTPUT_ENCODING) + + def __getnewargs__(self): + return (unicode(self),) + + def __getattr__(self, attr): + """text.string gives you text. This is for backwards + compatibility for Navigable*String, but for CData* it lets you + get the string without the CData wrapper.""" + if attr == 'string': + return self + else: + raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__.__name__, attr) + + def encode(self, encoding=DEFAULT_OUTPUT_ENCODING): + return self.decode().encode(encoding) + + def decodeGivenEventualEncoding(self, eventualEncoding): + return self + +class CData(NavigableString): + + def decodeGivenEventualEncoding(self, eventualEncoding): + return u'<![CDATA[' + self + u']]>' + +class ProcessingInstruction(NavigableString): + + def decodeGivenEventualEncoding(self, eventualEncoding): + output = self + if u'%SOUP-ENCODING%' in output: + output = self.substituteEncoding(output, eventualEncoding) + return u'<?' + output + u'?>' + +class Comment(NavigableString): + def decodeGivenEventualEncoding(self, eventualEncoding): + return u'<!--' + self + u'-->' + +class Declaration(NavigableString): + def decodeGivenEventualEncoding(self, eventualEncoding): + return u'<!' 
+ self + u'>' + +class Tag(PageElement): + + """Represents a found HTML tag with its attributes and contents.""" + + def _invert(h): + "Cheap function to invert a hash." + i = {} + for k,v in h.items(): + i[v] = k + return i + + XML_ENTITIES_TO_SPECIAL_CHARS = { "apos" : "'", + "quot" : '"', + "amp" : "&", + "lt" : "<", + "gt" : ">" } + + XML_SPECIAL_CHARS_TO_ENTITIES = _invert(XML_ENTITIES_TO_SPECIAL_CHARS) + + def _convertEntities(self, match): + """Used in a call to re.sub to replace HTML, XML, and numeric + entities with the appropriate Unicode characters. If HTML + entities are being converted, any unrecognized entities are + escaped.""" + x = match.group(1) + if self.convertHTMLEntities and x in name2codepoint: + return unichr(name2codepoint[x]) + elif x in self.XML_ENTITIES_TO_SPECIAL_CHARS: + if self.convertXMLEntities: + return self.XML_ENTITIES_TO_SPECIAL_CHARS[x] + else: + return u'&%s;' % x + elif len(x) > 0 and x[0] == '#': + # Handle numeric entities + if len(x) > 1 and x[1] == 'x': + return unichr(int(x[2:], 16)) + else: + return unichr(int(x[1:])) + + elif self.escapeUnrecognizedEntities: + return u'&%s;' % x + else: + return u'&%s;' % x + + def __init__(self, parser, name, attrs=None, parent=None, + previous=None): + "Basic constructor." + + # We don't actually store the parser object: that lets extracted + # chunks be garbage-collected + self.parserClass = parser.__class__ + self.isSelfClosing = parser.isSelfClosingTag(name) + self.name = name + if attrs == None: + attrs = [] + self.attrs = attrs + self.contents = [] + self.setup(parent, previous) + self.hidden = False + self.containsSubstitutions = False + self.convertHTMLEntities = parser.convertHTMLEntities + self.convertXMLEntities = parser.convertXMLEntities + self.escapeUnrecognizedEntities = parser.escapeUnrecognizedEntities + + def convert(kval): + "Converts HTML, XML and numeric entities in the attribute value." 
+ k, val = kval + if val is None: + return kval + return (k, re.sub("&(#\d+|#x[0-9a-fA-F]+|\w+);", + self._convertEntities, val)) + self.attrs = map(convert, self.attrs) + + def get(self, key, default=None): + """Returns the value of the 'key' attribute for the tag, or + the value given for 'default' if it doesn't have that + attribute.""" + return self._getAttrMap().get(key, default) + + def has_key(self, key): + return self._getAttrMap().has_key(key) + + def __getitem__(self, key): + """tag[key] returns the value of the 'key' attribute for the tag, + and throws an exception if it's not there.""" + return self._getAttrMap()[key] + + def __iter__(self): + "Iterating over a tag iterates over its contents." + return iter(self.contents) + + def __len__(self): + "The length of a tag is the length of its list of contents." + return len(self.contents) + + def __contains__(self, x): + return x in self.contents + + def __nonzero__(self): + "A tag is non-None even if it has no contents." + return True + + def __setitem__(self, key, value): + """Setting tag[key] sets the value of the 'key' attribute for the + tag.""" + self._getAttrMap() + self.attrMap[key] = value + found = False + for i in range(0, len(self.attrs)): + if self.attrs[i][0] == key: + self.attrs[i] = (key, value) + found = True + if not found: + self.attrs.append((key, value)) + self._getAttrMap()[key] = value + + def __delitem__(self, key): + "Deleting tag[key] deletes all 'key' attributes for the tag." + for item in self.attrs: + if item[0] == key: + self.attrs.remove(item) + #We don't break because bad HTML can define the same + #attribute multiple times. + self._getAttrMap() + if self.attrMap.has_key(key): + del self.attrMap[key] + + def __call__(self, *args, **kwargs): + """Calling a tag like a function is the same as calling its + findAll() method. Eg. 
tag('a') returns a list of all the A tags + found within this tag.""" + return apply(self.findAll, args, kwargs) + + def __getattr__(self, tag): + #print "Getattr %s.%s" % (self.__class__, tag) + if len(tag) > 3 and tag.rfind('Tag') == len(tag)-3: + return self.find(tag[:-3]) + elif tag.find('__') != 0: + return self.find(tag) + raise AttributeError, "'%s' object has no attribute '%s'" % (self.__class__, tag) + + def __eq__(self, other): + """Returns true iff this tag has the same name, the same attributes, + and the same contents (recursively) as the given tag. + + NOTE: right now this will return false if two tags have the + same attributes in a different order. Should this be fixed?""" + if not hasattr(other, 'name') or not hasattr(other, 'attrs') or not hasattr(other, 'contents') or self.name != other.name or self.attrs != other.attrs or len(self) != len(other): + return False + for i in range(0, len(self.contents)): + if self.contents[i] != other.contents[i]: + return False + return True + + def __ne__(self, other): + """Returns true iff this tag is not identical to the other tag, + as defined in __eq__.""" + return not self == other + + def __repr__(self, encoding=DEFAULT_OUTPUT_ENCODING): + """Renders this tag as a string.""" + return self.decode(eventualEncoding=encoding) + + BARE_AMPERSAND_OR_BRACKET = re.compile("([<>]|" + + "&(?!#\d+;|#x[0-9a-fA-F]+;|\w+;)" + + ")") + + def _sub_entity(self, x): + """Used with a regular expression to substitute the + appropriate XML entity for an XML special character.""" + return "&" + self.XML_SPECIAL_CHARS_TO_ENTITIES[x.group(0)[0]] + ";" + + def __unicode__(self): + return self.decode() + + def __str__(self): + return self.encode() + + def encode(self, encoding=DEFAULT_OUTPUT_ENCODING, + prettyPrint=False, indentLevel=0): + return self.decode(prettyPrint, indentLevel, encoding).encode(encoding) + + def decode(self, prettyPrint=False, indentLevel=0, + eventualEncoding=DEFAULT_OUTPUT_ENCODING): + """Returns a string or 
Unicode representation of this tag and + its contents. To get Unicode, pass None for encoding.""" + + attrs = [] + if self.attrs: + for key, val in self.attrs: + fmt = '%s="%s"' + if isString(val): + if (self.containsSubstitutions + and eventualEncoding is not None + and '%SOUP-ENCODING%' in val): + val = self.substituteEncoding(val, eventualEncoding) + + # The attribute value either: + # + # * Contains no embedded double quotes or single quotes. + # No problem: we enclose it in double quotes. + # * Contains embedded single quotes. No problem: + # double quotes work here too. + # * Contains embedded double quotes. No problem: + # we enclose it in single quotes. + # * Embeds both single _and_ double quotes. This + # can't happen naturally, but it can happen if + # you modify an attribute value after parsing + # the document. Now we have a bit of a + # problem. We solve it by enclosing the + # attribute in single quotes, and escaping any + # embedded single quotes to XML entities. + if '"' in val: + fmt = "%s='%s'" + if "'" in val: + # TODO: replace with apos when + # appropriate. + val = val.replace("'", "&squot;") + + # Now we're okay w/r/t quotes. But the attribute + # value might also contain angle brackets, or + # ampersands that aren't part of entities. We need + # to escape those to XML entities too. + val = self.BARE_AMPERSAND_OR_BRACKET.sub(self._sub_entity, val) + if val is None: + # Handle boolean attributes. 
+ decoded = key + else: + decoded = fmt % (key, val) + attrs.append(decoded) + close = '' + closeTag = '' + if self.isSelfClosing: + close = ' /' + else: + closeTag = '</%s>' % self.name + + indentTag, indentContents = 0, 0 + if prettyPrint: + indentTag = indentLevel + space = (' ' * (indentTag-1)) + indentContents = indentTag + 1 + contents = self.decodeContents(prettyPrint, indentContents, + eventualEncoding) + if self.hidden: + s = contents + else: + s = [] + attributeString = '' + if attrs: + attributeString = ' ' + ' '.join(attrs) + if prettyPrint: + s.append(space) + s.append('<%s%s%s>' % (self.name, attributeString, close)) + if prettyPrint: + s.append("\n") + s.append(contents) + if prettyPrint and contents and contents[-1] != "\n": + s.append("\n") + if prettyPrint and closeTag: + s.append(space) + s.append(closeTag) + if prettyPrint and closeTag and self.nextSibling: + s.append("\n") + s = ''.join(s) + return s + + def decompose(self): + """Recursively destroys the contents of this tree.""" + contents = [i for i in self.contents] + for i in contents: + if isinstance(i, Tag): + i.decompose() + else: + i.extract() + self.extract() + + def prettify(self, encoding=DEFAULT_OUTPUT_ENCODING): + return self.encode(encoding, True) + + def encodeContents(self, encoding=DEFAULT_OUTPUT_ENCODING, + prettyPrint=False, indentLevel=0): + return self.decodeContents(prettyPrint, indentLevel).encode(encoding) + + def decodeContents(self, prettyPrint=False, indentLevel=0, + eventualEncoding=DEFAULT_OUTPUT_ENCODING): + """Renders the contents of this tag as a string in the given + encoding. 
If encoding is None, returns a Unicode string.""" + s=[] + for c in self: + text = None + if isinstance(c, NavigableString): + text = c.decodeGivenEventualEncoding(eventualEncoding) + elif isinstance(c, Tag): + s.append(c.decode(prettyPrint, indentLevel, eventualEncoding)) + if text and prettyPrint: + text = text.strip() + if text: + if prettyPrint: + s.append(" " * (indentLevel-1)) + s.append(text) + if prettyPrint: + s.append("\n") + return ''.join(s) + + #Soup methods + + def find(self, name=None, attrs={}, recursive=True, text=None, + **kwargs): + """Return only the first child of this Tag matching the given + criteria.""" + r = None + l = self.findAll(name, attrs, recursive, text, 1, **kwargs) + if l: + r = l[0] + return r + findChild = find + + def findAll(self, name=None, attrs={}, recursive=True, text=None, + limit=None, **kwargs): + """Extracts a list of Tag objects that match the given + criteria. You can specify the name of the Tag and any + attributes you want the Tag to have. + + The value of a key-value pair in the 'attrs' map can be a + string, a list of strings, a regular expression object, or a + callable that takes a string and returns whether or not the + string matches for some custom definition of 'matches'. The + same is true of the tag name.""" + generator = self.recursiveChildGenerator + if not recursive: + generator = self.childGenerator + return self._findAll(name, attrs, text, limit, generator, **kwargs) + findChildren = findAll + + # Pre-3.x compatibility methods. Will go away in 4.0. + first = find + fetch = findAll + + def fetchText(self, text=None, recursive=True, limit=None): + return self.findAll(text=text, recursive=recursive, limit=limit) + + def firstText(self, text=None, recursive=True): + return self.find(text=text, recursive=recursive) + + # 3.x compatibility methods. Will go away in 4.0. 
+ def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING, + prettyPrint=False, indentLevel=0): + if encoding is None: + return self.decodeContents(prettyPrint, indentLevel, encoding) + else: + return self.encodeContents(encoding, prettyPrint, indentLevel) + + + #Private methods + + def _getAttrMap(self): + """Initializes a map representation of this tag's attributes, + if not already initialized.""" + if not getattr(self, 'attrMap'): + self.attrMap = {} + for (key, value) in self.attrs: + self.attrMap[key] = value + return self.attrMap + + #Generator methods + def recursiveChildGenerator(self): + if not len(self.contents): + raise StopIteration + stopNode = self._lastRecursiveChild().next + current = self.contents[0] + while current is not stopNode: + if not current: + break + yield current + current = current.next + + def childGenerator(self): + if not len(self.contents): + raise StopIteration + current = self.contents[0] + while current: + yield current + current = current.nextSibling + raise StopIteration + +# Next, a couple classes to represent queries and their results. 
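All of the `find*` methods above funnel into `PageElement._findAll`, which walks one of the navigation generators (`nextGenerator`, `parentGenerator`, etc.) and filters each yielded element with a strainer. A minimal, self-contained Python 3 sketch of that generator-plus-predicate pattern (the `Node`/`find_all` names are illustrative only, not this module's API, and the real generators also yield a trailing `None` that `_findAll` skips):

```python
class Node:
    """Toy stand-in for PageElement: each element keeps a .next pointer
    to the next element in document order."""

    def __init__(self, name, next_node=None):
        self.name = name
        self.next = next_node

    def next_generator(self):
        """Yield every element after this one, following .next pointers."""
        node = self.next
        while node is not None:
            yield node
            node = node.next


def find_all(start, predicate, limit=None):
    """Collect elements after `start` that satisfy `predicate`,
    stopping early once `limit` matches have been gathered."""
    results = []
    for node in start.next_generator():
        if predicate(node):
            results.append(node)
            if limit and len(results) >= limit:
                break
    return results


# Build a tiny chain in document order: a -> b -> c -> b
d = Node("b")
c = Node("c", d)
b = Node("b", c)
a = Node("a", b)

print([n.name for n in find_all(a, lambda n: n.name == "b")])           # ['b', 'b']
print([n.name for n in find_all(a, lambda n: n.name == "b", limit=1)])  # ['b']
```

The same walk-and-filter loop serves every direction of search; only the generator passed to `_findAll` changes.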
+class SoupStrainer: + """Encapsulates a number of ways of matching a markup element (tag or + text).""" + + def __init__(self, name=None, attrs={}, text=None, **kwargs): + self.name = name + if isString(attrs): + kwargs['class'] = attrs + attrs = None + if kwargs: + if attrs: + attrs = attrs.copy() + attrs.update(kwargs) + else: + attrs = kwargs + self.attrs = attrs + self.text = text + + def __str__(self): + if self.text: + return self.text + else: + return "%s|%s" % (self.name, self.attrs) + + def searchTag(self, markupName=None, markupAttrs={}): + found = None + markup = None + if isinstance(markupName, Tag): + markup = markupName + markupAttrs = markup + callFunctionWithTagData = callable(self.name) \ + and not isinstance(markupName, Tag) + + if (not self.name) \ + or callFunctionWithTagData \ + or (markup and self._matches(markup, self.name)) \ + or (not markup and self._matches(markupName, self.name)): + if callFunctionWithTagData: + match = self.name(markupName, markupAttrs) + else: + match = True + markupAttrMap = None + for attr, matchAgainst in self.attrs.items(): + if not markupAttrMap: + if hasattr(markupAttrs, 'get'): + markupAttrMap = markupAttrs + else: + markupAttrMap = {} + for k,v in markupAttrs: + markupAttrMap[k] = v + attrValue = markupAttrMap.get(attr) + if not self._matches(attrValue, matchAgainst): + match = False + break + if match: + if markup: + found = markup + else: + found = markupName + return found + + def search(self, markup): + #print 'looking for %s in %s' % (self, markup) + found = None + # If given a list of items, scan it for a text element that + # matches. + if isList(markup) and not isinstance(markup, Tag): + for element in markup: + if isinstance(element, NavigableString) \ + and self.search(element): + found = element + break + # If it's a Tag, make sure its name or attributes match. + # Don't bother with Tags if we're searching for text. 
+ elif isinstance(markup, Tag): + if not self.text: + found = self.searchTag(markup) + # If it's text, make sure the text matches. + elif isinstance(markup, NavigableString) or \ + isString(markup): + if self._matches(markup, self.text): + found = markup + else: + raise Exception, "I don't know how to match against a %s" \ + % markup.__class__ + return found + + def _matches(self, markup, matchAgainst): + #print "Matching %s against %s" % (markup, matchAgainst) + result = False + if matchAgainst == True and type(matchAgainst) == types.BooleanType: + result = markup != None + elif callable(matchAgainst): + result = matchAgainst(markup) + else: + #Custom match methods take the tag as an argument, but all + #other ways of matching match the tag name as a string. + if isinstance(markup, Tag): + markup = markup.name + if markup is not None and not isString(markup): + markup = unicode(markup) + #Now we know that chunk is either a string, or None. + if hasattr(matchAgainst, 'match'): + # It's a regexp object. + result = markup and matchAgainst.search(markup) + elif (isList(matchAgainst) + and (markup is not None or not isString(matchAgainst))): + result = markup in matchAgainst + elif hasattr(matchAgainst, 'items'): + result = markup.has_key(matchAgainst) + elif matchAgainst and isString(markup): + if isinstance(markup, unicode): + matchAgainst = unicode(matchAgainst) + else: + matchAgainst = str(matchAgainst) + + if not result: + result = matchAgainst == markup + return result + +class ResultSet(list): + """A ResultSet is just a list that keeps track of the SoupStrainer + that created it.""" + def __init__(self, source): + list.__init__([]) + self.source = source + +# Now, some helper functions. 
+ +def isList(l): + """Convenience method that works with all 2.x versions of Python + to determine whether or not something is listlike.""" + return ((hasattr(l, '__iter__') and not isString(l)) + or (type(l) in (types.ListType, types.TupleType))) + +def isString(s): + """Convenience method that works with all 2.x versions of Python + to determine whether or not something is stringlike.""" + try: + return isinstance(s, unicode) or isinstance(s, basestring) + except NameError: + return isinstance(s, str) + +def buildTagMap(default, *args): + """Turns a list of maps, lists, or scalars into a single map. + Used to build the SELF_CLOSING_TAGS, NESTABLE_TAGS, and + NESTING_RESET_TAGS maps out of lists and partial maps.""" + built = {} + for portion in args: + if hasattr(portion, 'items'): + #It's a map. Merge it. + for k,v in portion.items(): + built[k] = v + elif isList(portion) and not isString(portion): + #It's a list. Map each item to the default. + for k in portion: + built[k] = default + else: + #It's a scalar. Map it to the default. + built[portion] = default + return built + +# Now, the parser classes. + +class HTMLParserBuilder(HTMLParser): + + def __init__(self, soup): + HTMLParser.__init__(self) + self.soup = soup + + # We inherit feed() and reset(). 
+ + def handle_starttag(self, name, attrs): + if name == 'meta': + self.soup.extractCharsetFromMeta(attrs) + else: + self.soup.unknown_starttag(name, attrs) + + def handle_endtag(self, name): + self.soup.unknown_endtag(name) + + def handle_data(self, content): + self.soup.handle_data(content) + + def _toStringSubclass(self, text, subclass): + """Adds a certain piece of text to the tree as a NavigableString + subclass.""" + self.soup.endData() + self.handle_data(text) + self.soup.endData(subclass) + + def handle_pi(self, text): + """Handle a processing instruction as a ProcessingInstruction + object, possibly one with a %SOUP-ENCODING% slot into which an + encoding will be plugged later.""" + if text[:3] == "xml": + text = u"xml version='1.0' encoding='%SOUP-ENCODING%'" + self._toStringSubclass(text, ProcessingInstruction) + + def handle_comment(self, text): + "Handle comments as Comment objects." + self._toStringSubclass(text, Comment) + + def handle_charref(self, ref): + "Handle character references as data." + if self.soup.convertEntities: + data = unichr(int(ref)) + else: + data = '&#%s;' % ref + self.handle_data(data) + + def handle_entityref(self, ref): + """Handle entity references as data, possibly converting known + HTML and/or XML entity references to the corresponding Unicode + characters.""" + data = None + if self.soup.convertHTMLEntities: + try: + data = unichr(name2codepoint[ref]) + except KeyError: + pass + + if not data and self.soup.convertXMLEntities: + data = self.soup.XML_ENTITIES_TO_SPECIAL_CHARS.get(ref) + + if not data and self.soup.convertHTMLEntities and \ + not self.soup.XML_ENTITIES_TO_SPECIAL_CHARS.get(ref): + # TODO: We've got a problem here. We're told this is + # an entity reference, but it's not an XML entity + # reference or an HTML entity reference. Nonetheless, + # the logical thing to do is to pass it through as an + # unrecognized entity reference. 
+ # + # Except: when the input is "&carol;" this function + # will be called with input "carol". When the input is + # "AT&T", this function will be called with input + # "T". We have no way of knowing whether a semicolon + # was present originally, so we don't know whether + # this is an unknown entity or just a misplaced + # ampersand. + # + # The more common case is a misplaced ampersand, so I + # escape the ampersand and omit the trailing semicolon. + data = "&%s" % ref + if not data: + # This case is different from the one above, because we + # haven't already gone through a supposedly comprehensive + # mapping of entities to Unicode characters. We might not + # have gone through any mapping at all. So the chances are + # very high that this is a real entity, and not a + # misplaced ampersand. + data = "&%s;" % ref + self.handle_data(data) + + def handle_decl(self, data): + "Handle DOCTYPEs and the like as Declaration objects." + self._toStringSubclass(data, Declaration) + + def parse_declaration(self, i): + """Treat a bogus SGML declaration as raw data. Treat a CDATA + declaration as a CData object.""" + j = None + if self.rawdata[i:i+9] == '<![CDATA[': + k = self.rawdata.find(']]>', i) + if k == -1: + k = len(self.rawdata) + data = self.rawdata[i+9:k] + j = k+3 + self._toStringSubclass(data, CData) + else: + try: + j = HTMLParser.parse_declaration(self, i) + except HTMLParseError: + toHandle = self.rawdata[i:] + self.handle_data(toHandle) + j = i + len(toHandle) + return j + + +class BeautifulStoneSoup(Tag): + + """This class contains the basic parser and search code. It defines + a parser that knows nothing about tag behavior except for the + following: + + You can't close a tag without closing all the tags it encloses. + That is, "<foo><bar></foo>" actually means + "<foo><bar></bar></foo>". + + [Another possible explanation is "<foo><bar /></foo>", but since + this class defines no SELF_CLOSING_TAGS, it will never use that + explanation.] 
+ + This class is useful for parsing XML or made-up markup languages, + or when BeautifulSoup makes an assumption counter to what you were + expecting.""" + + SELF_CLOSING_TAGS = {} + NESTABLE_TAGS = {} + RESET_NESTING_TAGS = {} + QUOTE_TAGS = {} + PRESERVE_WHITESPACE_TAGS = [] + + MARKUP_MASSAGE = [(re.compile('(<[^<>]*)/>'), + lambda x: x.group(1) + ' />'), + (re.compile('<!\s+([^<>]*)>'), + lambda x: '<!' + x.group(1) + '>') + ] + + ROOT_TAG_NAME = u'[document]' + + HTML_ENTITIES = "html" + XML_ENTITIES = "xml" + XHTML_ENTITIES = "xhtml" + # TODO: This only exists for backwards-compatibility + ALL_ENTITIES = XHTML_ENTITIES + + # Used when determining whether a text node is all whitespace and + # can be replaced with a single space. A text node that contains + # fancy Unicode spaces (usually non-breaking) should be left + # alone. + STRIP_ASCII_SPACES = { 9: None, 10: None, 12: None, 13: None, 32: None, } + + def __init__(self, markup="", parseOnlyThese=None, fromEncoding=None, + markupMassage=True, smartQuotesTo=XML_ENTITIES, + convertEntities=None, selfClosingTags=None, isHTML=False, + builder=HTMLParserBuilder): + """The Soup object is initialized as the 'root tag', and the + provided markup (which can be a string or a file-like object) + is fed into the underlying parser. + + HTMLParser will process most bad HTML, and the BeautifulSoup + class has some tricks for dealing with some HTML that kills + HTMLParser, but Beautiful Soup can nonetheless choke or lose data + if your data uses self-closing tags or declarations + incorrectly. + + By default, Beautiful Soup uses regexes to sanitize input, + avoiding the vast majority of these problems. If the problems + don't apply to you, pass in False for markupMassage, and + you'll get better performance. + + The default parser massage techniques fix the two most common + instances of invalid HTML that choke HTMLParser: + + <br/> (No space between name of closing tag and tag close) + <! 
--Comment--> (Extraneous whitespace in declaration) + + You can pass in a custom list of (RE object, replace method) + tuples to get Beautiful Soup to scrub your input the way you + want.""" + + self.parseOnlyThese = parseOnlyThese + self.fromEncoding = fromEncoding + self.smartQuotesTo = smartQuotesTo + self.convertEntities = convertEntities + # Set the rules for how we'll deal with the entities we + # encounter + if self.convertEntities: + # It doesn't make sense to convert encoded characters to + # entities even while you're converting entities to Unicode. + # Just convert it all to Unicode. + self.smartQuotesTo = None + if convertEntities == self.HTML_ENTITIES: + self.convertXMLEntities = False + self.convertHTMLEntities = True + self.escapeUnrecognizedEntities = True + elif convertEntities == self.XHTML_ENTITIES: + self.convertXMLEntities = True + self.convertHTMLEntities = True + self.escapeUnrecognizedEntities = False + elif convertEntities == self.XML_ENTITIES: + self.convertXMLEntities = True + self.convertHTMLEntities = False + self.escapeUnrecognizedEntities = False + else: + self.convertXMLEntities = False + self.convertHTMLEntities = False + self.escapeUnrecognizedEntities = False + + self.instanceSelfClosingTags = buildTagMap(None, selfClosingTags) + self.builder = builder(self) + self.reset() + + if hasattr(markup, 'read'): # It's a file-type object. + markup = markup.read() + self.markup = markup + self.markupMassage = markupMassage + try: + self._feed(isHTML=isHTML) + except StopParsing: + pass + self.markup = None # The markup can now be GCed. + self.builder = None # So can the builder. + + def _feed(self, inDocumentEncoding=None, isHTML=False): + # Convert the document to Unicode. 
+ markup = self.markup + if isinstance(markup, unicode): + if not hasattr(self, 'originalEncoding'): + self.originalEncoding = None + else: + dammit = UnicodeDammit\ + (markup, [self.fromEncoding, inDocumentEncoding], + smartQuotesTo=self.smartQuotesTo, isHTML=isHTML) + markup = dammit.unicode + self.originalEncoding = dammit.originalEncoding + self.declaredHTMLEncoding = dammit.declaredHTMLEncoding + if markup: + if self.markupMassage: + if not isList(self.markupMassage): + self.markupMassage = self.MARKUP_MASSAGE + for fix, m in self.markupMassage: + markup = fix.sub(m, markup) + # TODO: We get rid of markupMassage so that the + # soup object can be deepcopied later on. Some + # Python installations can't copy regexes. If anyone + # was relying on the existence of markupMassage, this + # might cause problems. + del(self.markupMassage) + self.builder.reset() + + self.builder.feed(markup) + # Close out any unfinished strings and close all the open tags. + self.endData() + while self.currentTag.name != self.ROOT_TAG_NAME: + self.popTag() + + def isSelfClosingTag(self, name): + """Returns true iff the given string is the name of a + self-closing tag according to this parser.""" + return self.SELF_CLOSING_TAGS.has_key(name) \ + or self.instanceSelfClosingTags.has_key(name) + + def reset(self): + Tag.__init__(self, self, self.ROOT_TAG_NAME) + self.hidden = 1 + self.builder.reset() + self.currentData = [] + self.currentTag = None + self.tagStack = [] + self.quoteStack = [] + self.pushTag(self) + + def popTag(self): + tag = self.tagStack.pop() + # Tags with just one string-owning child get the child as a + # 'string' property, so that soup.tag.string is shorthand for + # soup.tag.contents[0] + if len(self.currentTag.contents) == 1 and \ + isinstance(self.currentTag.contents[0], NavigableString): + self.currentTag.string = self.currentTag.contents[0] + + #print "Pop", tag.name + if self.tagStack: + self.currentTag = self.tagStack[-1] + return self.currentTag + + def 
pushTag(self, tag): + #print "Push", tag.name + if self.currentTag: + self.currentTag.contents.append(tag) + self.tagStack.append(tag) + self.currentTag = self.tagStack[-1] + + def endData(self, containerClass=NavigableString): + if self.currentData: + currentData = u''.join(self.currentData) + if (currentData.translate(self.STRIP_ASCII_SPACES) == '' and + not set([tag.name for tag in self.tagStack]).intersection( + self.PRESERVE_WHITESPACE_TAGS)): + if '\n' in currentData: + currentData = '\n' + else: + currentData = ' ' + self.currentData = [] + if self.parseOnlyThese and len(self.tagStack) <= 1 and \ + (not self.parseOnlyThese.text or \ + not self.parseOnlyThese.search(currentData)): + return + o = containerClass(currentData) + o.setup(self.currentTag, self.previous) + if self.previous: + self.previous.next = o + self.previous = o + self.currentTag.contents.append(o) + + + def _popToTag(self, name, inclusivePop=True): + """Pops the tag stack up to and including the most recent + instance of the given tag. If inclusivePop is false, pops the tag + stack up to but *not* including the most recent instqance of + the given tag.""" + #print "Popping to %s" % name + if name == self.ROOT_TAG_NAME: + return + + numPops = 0 + mostRecentTag = None + for i in range(len(self.tagStack)-1, 0, -1): + if name == self.tagStack[i].name: + numPops = len(self.tagStack)-i + break + if not inclusivePop: + numPops = numPops - 1 + + for i in range(0, numPops): + mostRecentTag = self.popTag() + return mostRecentTag + + def _smartPop(self, name): + + """We need to pop up to the previous tag of this type, unless + one of this tag's nesting reset triggers comes between this + tag and the previous tag of this type, OR unless this tag is a + generic nesting trigger and another generic nesting trigger + comes between this tag and the previous tag of this type. + + Examples: + <p>Foo<b>Bar *<p>* should pop to 'p', not 'b'. + <p>Foo<table>Bar *<p>* should pop to 'table', not 'p'. 
+ <p>Foo<table><tr>Bar *<p>* should pop to 'tr', not 'p'. + + <li><ul><li> *<li>* should pop to 'ul', not the first 'li'. + <tr><table><tr> *<tr>* should pop to 'table', not the first 'tr' + <td><tr><td> *<td>* should pop to 'tr', not the first 'td' + """ + + nestingResetTriggers = self.NESTABLE_TAGS.get(name) + isNestable = nestingResetTriggers != None + isResetNesting = self.RESET_NESTING_TAGS.has_key(name) + popTo = None + inclusive = True + for i in range(len(self.tagStack)-1, 0, -1): + p = self.tagStack[i] + if (not p or p.name == name) and not isNestable: + #Non-nestable tags get popped to the top or to their + #last occurance. + popTo = name + break + if (nestingResetTriggers != None + and p.name in nestingResetTriggers) \ + or (nestingResetTriggers == None and isResetNesting + and self.RESET_NESTING_TAGS.has_key(p.name)): + + #If we encounter one of the nesting reset triggers + #peculiar to this tag, or we encounter another tag + #that causes nesting to reset, pop up to but not + #including that tag. + popTo = p.name + inclusive = False + break + p = p.parent + if popTo: + self._popToTag(popTo, inclusive) + + def unknown_starttag(self, name, attrs, selfClosing=0): + #print "Start tag %s: %s" % (name, attrs) + if self.quoteStack: + #This is not a real tag. + #print "<%s> is not real!" 
% name + attrs = ''.join(map(lambda(x, y): ' %s="%s"' % (x, y), attrs)) + self.handle_data('<%s%s>' % (name, attrs)) + return + self.endData() + + if not self.isSelfClosingTag(name) and not selfClosing: + self._smartPop(name) + + if self.parseOnlyThese and len(self.tagStack) <= 1 \ + and (self.parseOnlyThese.text or not self.parseOnlyThese.searchTag(name, attrs)): + return + + tag = Tag(self, name, attrs, self.currentTag, self.previous) + if self.previous: + self.previous.next = tag + self.previous = tag + self.pushTag(tag) + if selfClosing or self.isSelfClosingTag(name): + self.popTag() + if name in self.QUOTE_TAGS: + #print "Beginning quote (%s)" % name + self.quoteStack.append(name) + self.literal = 1 + return tag + + def unknown_endtag(self, name): + #print "End tag %s" % name + if self.quoteStack and self.quoteStack[-1] != name: + #This is not a real end tag. + #print "</%s> is not real!" % name + self.handle_data('</%s>' % name) + return + self.endData() + self._popToTag(name) + if self.quoteStack and self.quoteStack[-1] == name: + self.quoteStack.pop() + self.literal = (len(self.quoteStack) > 0) + + def handle_data(self, data): + self.currentData.append(data) + + def extractCharsetFromMeta(self, attrs): + self.unknown_starttag('meta', attrs) + + +class BeautifulSoup(BeautifulStoneSoup): + + """This parser knows the following facts about HTML: + + * Some tags have no closing tag and should be interpreted as being + closed as soon as they are encountered. + + * The text inside some tags (ie. 'script') may contain tags which + are not really part of the document and which should be parsed + as text, not tags. If you want to parse the text as tags, you can + always fetch it and parse it explicitly. + + * Tag nesting rules: + + Most tags can't be nested at all. For instance, the occurance of + a <p> tag should implicitly close the previous <p> tag. + + <p>Para1<p>Para2 + should be transformed into: + <p>Para1</p><p>Para2 + + Some tags can be nested arbitrarily. 
For instance, the occurance + of a <blockquote> tag should _not_ implicitly close the previous + <blockquote> tag. + + Alice said: <blockquote>Bob said: <blockquote>Blah + should NOT be transformed into: + Alice said: <blockquote>Bob said: </blockquote><blockquote>Blah + + Some tags can be nested, but the nesting is reset by the + interposition of other tags. For instance, a <tr> tag should + implicitly close the previous <tr> tag within the same <table>, + but not close a <tr> tag in another table. + + <table><tr>Blah<tr>Blah + should be transformed into: + <table><tr>Blah</tr><tr>Blah + but, + <tr>Blah<table><tr>Blah + should NOT be transformed into + <tr>Blah<table></tr><tr>Blah + + Differing assumptions about tag nesting rules are a major source + of problems with the BeautifulSoup class. If BeautifulSoup is not + treating as nestable a tag your page author treats as nestable, + try ICantBelieveItsBeautifulSoup, MinimalSoup, or + BeautifulStoneSoup before writing your own subclass.""" + + def __init__(self, *args, **kwargs): + if not kwargs.has_key('smartQuotesTo'): + kwargs['smartQuotesTo'] = self.HTML_ENTITIES + kwargs['isHTML'] = True + BeautifulStoneSoup.__init__(self, *args, **kwargs) + + SELF_CLOSING_TAGS = buildTagMap(None, + ['br' , 'hr', 'input', 'img', 'meta', + 'spacer', 'link', 'frame', 'base']) + + PRESERVE_WHITESPACE_TAGS = set(['pre', 'textarea']) + + QUOTE_TAGS = {'script' : None, 'textarea' : None} + + #According to the HTML standard, each of these inline tags can + #contain another tag of the same type. Furthermore, it's common + #to actually use these tags this way. + NESTABLE_INLINE_TAGS = ['span', 'font', 'q', 'object', 'bdo', 'sub', 'sup', + 'center'] + + #According to the HTML standard, these block tags can contain + #another tag of the same type. Furthermore, it's common + #to actually use these tags this way. 
+ NESTABLE_BLOCK_TAGS = ['blockquote', 'div', 'fieldset', 'ins', 'del'] + + #Lists can contain other lists, but there are restrictions. + NESTABLE_LIST_TAGS = { 'ol' : [], + 'ul' : [], + 'li' : ['ul', 'ol'], + 'dl' : [], + 'dd' : ['dl'], + 'dt' : ['dl'] } + + #Tables can contain other tables, but there are restrictions. + NESTABLE_TABLE_TAGS = {'table' : [], + 'tr' : ['table', 'tbody', 'tfoot', 'thead'], + 'td' : ['tr'], + 'th' : ['tr'], + 'thead' : ['table'], + 'tbody' : ['table'], + 'tfoot' : ['table'], + } + + NON_NESTABLE_BLOCK_TAGS = ['address', 'form', 'p', 'pre'] + + #If one of these tags is encountered, all tags up to the next tag of + #this type are popped. + RESET_NESTING_TAGS = buildTagMap(None, NESTABLE_BLOCK_TAGS, 'noscript', + NON_NESTABLE_BLOCK_TAGS, + NESTABLE_LIST_TAGS, + NESTABLE_TABLE_TAGS) + + NESTABLE_TAGS = buildTagMap([], NESTABLE_INLINE_TAGS, NESTABLE_BLOCK_TAGS, + NESTABLE_LIST_TAGS, NESTABLE_TABLE_TAGS) + + # Used to detect the charset in a META tag; see start_meta + CHARSET_RE = re.compile("((^|;)\s*charset=)([^;]*)", re.M) + + def extractCharsetFromMeta(self, attrs): + """Beautiful Soup can detect a charset included in a META tag, + try to convert the document to that charset, and re-parse the + document from the beginning.""" + httpEquiv = None + contentType = None + contentTypeIndex = None + tagNeedsEncodingSubstitution = False + + for i in range(0, len(attrs)): + key, value = attrs[i] + key = key.lower() + if key == 'http-equiv': + httpEquiv = value + elif key == 'content': + contentType = value + contentTypeIndex = i + + if httpEquiv and contentType: # It's an interesting meta tag. 
+ match = self.CHARSET_RE.search(contentType) + if match: + if (self.declaredHTMLEncoding is not None or + self.originalEncoding == self.fromEncoding): + # An HTML encoding was sniffed while converting + # the document to Unicode, or an HTML encoding was + # sniffed during a previous pass through the + # document, or an encoding was specified + # explicitly and it worked. Rewrite the meta tag. + def rewrite(match): + return match.group(1) + "%SOUP-ENCODING%" + newAttr = self.CHARSET_RE.sub(rewrite, contentType) + attrs[contentTypeIndex] = (attrs[contentTypeIndex][0], + newAttr) + tagNeedsEncodingSubstitution = True + else: + # This is our first pass through the document. + # Go through it again with the encoding information. + newCharset = match.group(3) + if newCharset and newCharset != self.originalEncoding: + self.declaredHTMLEncoding = newCharset + self._feed(self.declaredHTMLEncoding) + raise StopParsing + pass + tag = self.unknown_starttag("meta", attrs) + if tag and tagNeedsEncodingSubstitution: + tag.containsSubstitutions = True + + +class StopParsing(Exception): + pass + +class ICantBelieveItsBeautifulSoup(BeautifulSoup): + + """The BeautifulSoup class is oriented towards skipping over + common HTML errors like unclosed tags. However, sometimes it makes + errors of its own. For instance, consider this fragment: + + <b>Foo<b>Bar</b></b> + + This is perfectly valid (if bizarre) HTML. However, the + BeautifulSoup class will implicitly close the first b tag when it + encounters the second 'b'. It will think the author wrote + "<b>Foo<b>Bar", and didn't close the first 'b' tag, because + there's no real-world reason to bold something that's already + bold. When it encounters '</b></b>' it will close two more 'b' + tags, for a grand total of three tags closed instead of two. This + can throw off the rest of your document structure. The same is + true of a number of other tags, listed below. 
+ + It's much more common for someone to forget to close a 'b' tag + than to actually use nested 'b' tags, and the BeautifulSoup class + handles the common case. This class handles the not-co-common + case: where you can't believe someone wrote what they did, but + it's valid HTML and BeautifulSoup screwed up by assuming it + wouldn't be.""" + + I_CANT_BELIEVE_THEYRE_NESTABLE_INLINE_TAGS = \ + ['em', 'big', 'i', 'small', 'tt', 'abbr', 'acronym', 'strong', + 'cite', 'code', 'dfn', 'kbd', 'samp', 'strong', 'var', 'b', + 'big'] + + I_CANT_BELIEVE_THEYRE_NESTABLE_BLOCK_TAGS = ['noscript'] + + NESTABLE_TAGS = buildTagMap([], BeautifulSoup.NESTABLE_TAGS, + I_CANT_BELIEVE_THEYRE_NESTABLE_BLOCK_TAGS, + I_CANT_BELIEVE_THEYRE_NESTABLE_INLINE_TAGS) + +class MinimalSoup(BeautifulSoup): + """The MinimalSoup class is for parsing HTML that contains + pathologically bad markup. It makes no assumptions about tag + nesting, but it does know which tags are self-closing, that + <script> tags contain Javascript and should not be parsed, that + META tags may contain encoding information, and so on. + + This also makes it better for subclassing than BeautifulStoneSoup + or BeautifulSoup.""" + + RESET_NESTING_TAGS = buildTagMap('noscript') + NESTABLE_TAGS = {} + +class BeautifulSOAP(BeautifulStoneSoup): + """This class will push a tag with only a single string child into + the tag's parent as an attribute. The attribute's name is the tag + name, and the value is the string child. An example should give + the flavor of the change: + + <foo><bar>baz</bar></foo> + => + <foo bar="baz"><bar>baz</bar></foo> + + You can then access fooTag['bar'] instead of fooTag.barTag.string. + + This is, of course, useful for scraping structures that tend to + use subelements instead of attributes, such as SOAP messages. Note + that it modifies its input, so don't print the modified version + out. + + I'm not sure how many people really want to use this class; let me + know if you do. 
Mainly I like the name.""" + + def popTag(self): + if len(self.tagStack) > 1: + tag = self.tagStack[-1] + parent = self.tagStack[-2] + parent._getAttrMap() + if (isinstance(tag, Tag) and len(tag.contents) == 1 and + isinstance(tag.contents[0], NavigableString) and + not parent.attrMap.has_key(tag.name)): + parent[tag.name] = tag.contents[0] + BeautifulStoneSoup.popTag(self) + +#Enterprise class names! It has come to our attention that some people +#think the names of the Beautiful Soup parser classes are too silly +#and "unprofessional" for use in enterprise screen-scraping. We feel +#your pain! For such-minded folk, the Beautiful Soup Consortium And +#All-Night Kosher Bakery recommends renaming this file to +#"RobustParser.py" (or, in cases of extreme enterprisiness, +#"RobustParserBeanInterface.class") and using the following +#enterprise-friendly class aliases: +class RobustXMLParser(BeautifulStoneSoup): + pass +class RobustHTMLParser(BeautifulSoup): + pass +class RobustWackAssHTMLParser(ICantBelieveItsBeautifulSoup): + pass +class RobustInsanelyWackAssHTMLParser(MinimalSoup): + pass +class SimplifyingSOAPParser(BeautifulSOAP): + pass + +###################################################### +# +# Bonus library: Unicode, Dammit +# +# This class forces XML data into a standard format (usually to UTF-8 +# or Unicode). It is heavily based on code from Mark Pilgrim's +# Universal Feed Parser. It does not rewrite the XML or HTML to +# reflect a new encoding: that happens in BeautifulStoneSoup.handle_pi +# (XML) and BeautifulSoup.start_meta (HTML). + +# Autodetects character encodings. +# Download from http://chardet.feedparser.org/ +try: + import chardet +# import chardet.constants +# chardet.constants._debug = 1 +except ImportError: + chardet = None + +# cjkcodecs and iconv_codec make Python know about more character encodings. +# Both are available from http://cjkpython.i18n.org/ +# They're built in if you use Python 2.4. 
+try: + import cjkcodecs.aliases +except ImportError: + pass +try: + import iconv_codec +except ImportError: + pass + +class UnicodeDammit: + """A class for detecting the encoding of a *ML document and + converting it to a Unicode string. If the source encoding is + windows-1252, can replace MS smart quotes with their HTML or XML + equivalents.""" + + # This dictionary maps commonly seen values for "charset" in HTML + # meta tags to the corresponding Python codec names. It only covers + # values that aren't in Python's aliases and can't be determined + # by the heuristics in find_codec. + CHARSET_ALIASES = { "macintosh" : "mac-roman", + "x-sjis" : "shift-jis" } + + def __init__(self, markup, overrideEncodings=[], + smartQuotesTo='xml', isHTML=False): + self.declaredHTMLEncoding = None + self.markup, documentEncoding, sniffedEncoding = \ + self._detectEncoding(markup, isHTML) + self.smartQuotesTo = smartQuotesTo + self.triedEncodings = [] + if markup == '' or isinstance(markup, unicode): + self.originalEncoding = None + self.unicode = unicode(markup) + return + + u = None + for proposedEncoding in overrideEncodings: + u = self._convertFrom(proposedEncoding) + if u: break + if not u: + for proposedEncoding in (documentEncoding, sniffedEncoding): + u = self._convertFrom(proposedEncoding) + if u: break + + # If no luck and we have auto-detection library, try that: + if not u and chardet and not isinstance(self.markup, unicode): + u = self._convertFrom(chardet.detect(self.markup)['encoding']) + + # As a last resort, try utf-8 and windows-1252: + if not u: + for proposed_encoding in ("utf-8", "windows-1252"): + u = self._convertFrom(proposed_encoding) + if u: break + + self.unicode = u + if not u: self.originalEncoding = None + + def _subMSChar(self, match): + """Changes a MS smart quote character to an XML or HTML + entity.""" + orig = match.group(1) + sub = self.MS_CHARS.get(orig) + if type(sub) == types.TupleType: + if self.smartQuotesTo == 'xml': + sub = 
'&#x'.encode() + sub[1].encode() + ';'.encode() + else: + sub = '&'.encode() + sub[0].encode() + ';'.encode() + else: + sub = sub.encode() + return sub + + def _convertFrom(self, proposed): + proposed = self.find_codec(proposed) + if not proposed or proposed in self.triedEncodings: + return None + self.triedEncodings.append(proposed) + markup = self.markup + + # Convert smart quotes to HTML if coming from an encoding + # that might have them. + if self.smartQuotesTo and proposed.lower() in("windows-1252", + "iso-8859-1", + "iso-8859-2"): + smart_quotes_re = "([\x80-\x9f])" + smart_quotes_compiled = re.compile(smart_quotes_re) + markup = smart_quotes_compiled.sub(self._subMSChar, markup) + + try: + # print "Trying to convert document to %s" % proposed + u = self._toUnicode(markup, proposed) + self.markup = u + self.originalEncoding = proposed + except Exception, e: + # print "That didn't work!" + # print e + return None + #print "Correct encoding: %s" % proposed + return self.markup + + def _toUnicode(self, data, encoding): + '''Given a string and its encoding, decodes the string into Unicode. 
+ %encoding is a string recognized by encodings.aliases''' + + # strip Byte Order Mark (if present) + if (len(data) >= 4) and (data[:2] == '\xfe\xff') \ + and (data[2:4] != '\x00\x00'): + encoding = 'utf-16be' + data = data[2:] + elif (len(data) >= 4) and (data[:2] == '\xff\xfe') \ + and (data[2:4] != '\x00\x00'): + encoding = 'utf-16le' + data = data[2:] + elif data[:3] == '\xef\xbb\xbf': + encoding = 'utf-8' + data = data[3:] + elif data[:4] == '\x00\x00\xfe\xff': + encoding = 'utf-32be' + data = data[4:] + elif data[:4] == '\xff\xfe\x00\x00': + encoding = 'utf-32le' + data = data[4:] + newdata = unicode(data, encoding) + return newdata + + def _detectEncoding(self, xml_data, isHTML=False): + """Given a document, tries to detect its XML encoding.""" + xml_encoding = sniffed_xml_encoding = None + try: + if xml_data[:4] == '\x4c\x6f\xa7\x94': + # EBCDIC + xml_data = self._ebcdic_to_ascii(xml_data) + elif xml_data[:4] == '\x00\x3c\x00\x3f': + # UTF-16BE + sniffed_xml_encoding = 'utf-16be' + xml_data = unicode(xml_data, 'utf-16be').encode('utf-8') + elif (len(xml_data) >= 4) and (xml_data[:2] == '\xfe\xff') \ + and (xml_data[2:4] != '\x00\x00'): + # UTF-16BE with BOM + sniffed_xml_encoding = 'utf-16be' + xml_data = unicode(xml_data[2:], 'utf-16be').encode('utf-8') + elif xml_data[:4] == '\x3c\x00\x3f\x00': + # UTF-16LE + sniffed_xml_encoding = 'utf-16le' + xml_data = unicode(xml_data, 'utf-16le').encode('utf-8') + elif (len(xml_data) >= 4) and (xml_data[:2] == '\xff\xfe') and \ + (xml_data[2:4] != '\x00\x00'): + # UTF-16LE with BOM + sniffed_xml_encoding = 'utf-16le' + xml_data = unicode(xml_data[2:], 'utf-16le').encode('utf-8') + elif xml_data[:4] == '\x00\x00\x00\x3c': + # UTF-32BE + sniffed_xml_encoding = 'utf-32be' + xml_data = unicode(xml_data, 'utf-32be').encode('utf-8') + elif xml_data[:4] == '\x3c\x00\x00\x00': + # UTF-32LE + sniffed_xml_encoding = 'utf-32le' + xml_data = unicode(xml_data, 'utf-32le').encode('utf-8') + elif xml_data[:4] == '\x00\x00\xfe\xff': 
+ # UTF-32BE with BOM + sniffed_xml_encoding = 'utf-32be' + xml_data = unicode(xml_data[4:], 'utf-32be').encode('utf-8') + elif xml_data[:4] == '\xff\xfe\x00\x00': + # UTF-32LE with BOM + sniffed_xml_encoding = 'utf-32le' + xml_data = unicode(xml_data[4:], 'utf-32le').encode('utf-8') + elif xml_data[:3] == '\xef\xbb\xbf': + # UTF-8 with BOM + sniffed_xml_encoding = 'utf-8' + xml_data = unicode(xml_data[3:], 'utf-8').encode('utf-8') + else: + sniffed_xml_encoding = 'ascii' + pass + except: + xml_encoding_match = None + xml_encoding_re = '^<\?.*encoding=[\'"](.*?)[\'"].*\?>'.encode() + xml_encoding_match = re.compile(xml_encoding_re).match(xml_data) + if not xml_encoding_match and isHTML: + meta_re = '<\s*meta[^>]+charset=([^>]*?)[;\'">]'.encode() + regexp = re.compile(meta_re, re.I) + xml_encoding_match = regexp.search(xml_data) + if xml_encoding_match is not None: + xml_encoding = xml_encoding_match.groups()[0].decode( + 'ascii').lower() + if isHTML: + self.declaredHTMLEncoding = xml_encoding + if sniffed_xml_encoding and \ + (xml_encoding in ('iso-10646-ucs-2', 'ucs-2', 'csunicode', + 'iso-10646-ucs-4', 'ucs-4', 'csucs4', + 'utf-16', 'utf-32', 'utf_16', 'utf_32', + 'utf16', 'u16')): + xml_encoding = sniffed_xml_encoding + return xml_data, xml_encoding, sniffed_xml_encoding + + + def find_codec(self, charset): + return self._codec(self.CHARSET_ALIASES.get(charset, charset)) \ + or (charset and self._codec(charset.replace("-", ""))) \ + or (charset and self._codec(charset.replace("-", "_"))) \ + or charset + + def _codec(self, charset): + if not charset: return charset + codec = None + try: + codecs.lookup(charset) + codec = charset + except (LookupError, ValueError): + pass + return codec + + EBCDIC_TO_ASCII_MAP = None + def _ebcdic_to_ascii(self, s): + c = self.__class__ + if not c.EBCDIC_TO_ASCII_MAP: + emap = (0,1,2,3,156,9,134,127,151,141,142,11,12,13,14,15, + 16,17,18,19,157,133,8,135,24,25,146,143,28,29,30,31, + 
128,129,130,131,132,10,23,27,136,137,138,139,140,5,6,7, + 144,145,22,147,148,149,150,4,152,153,154,155,20,21,158,26, + 32,160,161,162,163,164,165,166,167,168,91,46,60,40,43,33, + 38,169,170,171,172,173,174,175,176,177,93,36,42,41,59,94, + 45,47,178,179,180,181,182,183,184,185,124,44,37,95,62,63, + 186,187,188,189,190,191,192,193,194,96,58,35,64,39,61,34, + 195,97,98,99,100,101,102,103,104,105,196,197,198,199,200, + 201,202,106,107,108,109,110,111,112,113,114,203,204,205, + 206,207,208,209,126,115,116,117,118,119,120,121,122,210, + 211,212,213,214,215,216,217,218,219,220,221,222,223,224, + 225,226,227,228,229,230,231,123,65,66,67,68,69,70,71,72, + 73,232,233,234,235,236,237,125,74,75,76,77,78,79,80,81, + 82,238,239,240,241,242,243,92,159,83,84,85,86,87,88,89, + 90,244,245,246,247,248,249,48,49,50,51,52,53,54,55,56,57, + 250,251,252,253,254,255) + import string + c.EBCDIC_TO_ASCII_MAP = string.maketrans( \ + ''.join(map(chr, range(256))), ''.join(map(chr, emap))) + return s.translate(c.EBCDIC_TO_ASCII_MAP) + + MS_CHARS = { '\x80' : ('euro', '20AC'), + '\x81' : ' ', + '\x82' : ('sbquo', '201A'), + '\x83' : ('fnof', '192'), + '\x84' : ('bdquo', '201E'), + '\x85' : ('hellip', '2026'), + '\x86' : ('dagger', '2020'), + '\x87' : ('Dagger', '2021'), + '\x88' : ('circ', '2C6'), + '\x89' : ('permil', '2030'), + '\x8A' : ('Scaron', '160'), + '\x8B' : ('lsaquo', '2039'), + '\x8C' : ('OElig', '152'), + '\x8D' : '?', + '\x8E' : ('#x17D', '17D'), + '\x8F' : '?', + '\x90' : '?', + '\x91' : ('lsquo', '2018'), + '\x92' : ('rsquo', '2019'), + '\x93' : ('ldquo', '201C'), + '\x94' : ('rdquo', '201D'), + '\x95' : ('bull', '2022'), + '\x96' : ('ndash', '2013'), + '\x97' : ('mdash', '2014'), + '\x98' : ('tilde', '2DC'), + '\x99' : ('trade', '2122'), + '\x9a' : ('scaron', '161'), + '\x9b' : ('rsaquo', '203A'), + '\x9c' : ('oelig', '153'), + '\x9d' : '?', + '\x9e' : ('#x17E', '17E'), + '\x9f' : ('Yuml', ''),} + +####################################################################### + + +#By 
default, act as an HTML pretty-printer. +if __name__ == '__main__': + import sys + soup = BeautifulSoup(sys.stdin) + print soup.prettify() diff --git a/infoslicer/processing/HTML_Parser.py b/infoslicer/processing/HTML_Parser.py new file mode 100644 index 0000000..b99e754 --- /dev/null +++ b/infoslicer/processing/HTML_Parser.py @@ -0,0 +1,256 @@ +# Copyright (C) IBM Corporation 2008
+
+from BeautifulSoup import BeautifulSoup, Tag
+from NewtifulSoup import NewtifulStoneSoup as BeautifulStoneSoup
+import re
+from datetime import date
+
+class NoDocException(Exception):
+ def __init__(self, value):
+ self.parameter = value
+ def __str__(self):
+ return repr(self.parameter)
+
+"""
+Wraps the Beautiful Soup HTML parser in a custom class, adding some
+MediaWiki- and DITA-specific parsing functionality.
+"""
+class HTML_Parser:
+
+ #=======================================================================
+ # These lists are used at the pre-parsing stage
+ keep_tags = [ "html", "body", "p", "h1", "h2", "h3", "h4", "h5", "h6",\
+ "img", "table", "tr", "th", "td", "ol", "ul", "li", "sup", "sub"]
+ remove_tags_keep_content = ["div", "span", "strong", "a", "i", "b", "u", "color", "font"]
+ remove_classes_regexp = ""
+ #=======================================================================
+
+ #=======================================================================
+ # These lists are used at the parsing stage
+ root_node = "body"
+ section_separators = ["h3", "h4", "h5"]
+ reference_separators = ["h1", "h2"]
+ block_elements = ["img", "table", "ol", "ul"]
+ #=======================================================================
+
+ def __init__(self, document_to_parse, title, source_url):
+ if document_to_parse is None:
+ raise NoDocException("No content to parse - supply document to __init__")
+ self.input = BeautifulSoup(document_to_parse)
+ self.source = source_url
+ self.output_soup = BeautifulStoneSoup('<?xml version="1.0" encoding="utf-8"?><reference><title>%s</title></reference>' % title)
+ # First ID issued will be id below + 1
+ self.ids = {"reference" : 1,\
+ "section" : 1,\
+ "p" : 1,\
+ "ph" : 1\
+ }
+ self.image_list = self.tag_generator("reference", self.tag_generator("refbody"),[("id", "imagelist")])
+
+ def create_paragraph(self, text, tag="p"):
+ """
+ Creates a new paragraph containing <ph> tags, surrounded by the specified tag
+ @param text: Text to mark up
+ @param tag: Tag to surround with (defaults to "p")
+ @return: new tag
+ """
+ new_para = self.tag_generator(tag)
+ sentences = re.split(re.compile("[\.\!\?\"] "), text)
+ separators = re.findall(re.compile("[\.\!\?\"](?= )"), text)
+ for i in range(len(sentences) - 1):
+ new_para.append(self.tag_generator("ph", sentences[i] + separators[i]))
+ new_para.append(self.tag_generator("ph", sentences[-1]))
+ return new_para
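The split/findall pairing in create_paragraph() above recombines each sentence with its terminating punctuation. A minimal standalone sketch (Python 3 syntax; `split_sentences` is a hypothetical helper returning a plain list instead of <ph> tags):

```python
import re

def split_sentences(text):
    # Split on sentence-ending punctuation followed by a space; a separate
    # lookahead findall recovers the punctuation characters so each sentence
    # can be re-joined with its own terminator, as create_paragraph() does.
    sentences = re.split(r'[\.\!\?\"] ', text)
    separators = re.findall(r'[\.\!\?\"](?= )', text)
    parts = [sentences[i] + separators[i] for i in range(len(sentences) - 1)]
    parts.append(sentences[-1])  # final sentence keeps whatever ending it has
    return parts

print(split_sentences('One. Two! Three'))  # → ['One.', 'Two!', 'Three']
```

Note that a terminator not followed by a space (e.g. at end of input) is left attached to the last sentence, matching the behaviour of the method above.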
+
+ def get_publisher(self):
+ """
+ Extracts publisher from source URL
+ @return: name of publisher
+ """
+ output = self.source.replace("http://", "").split("/")[0].split(".")
+ return ".".join([output[-2], output[-1]])
+
+ def image_handler(self):
+ """
+ Extracts image tags from the document
+ """
+ for img in self.input.findAll("img"):
+ too_small = False
+ image_path = img['src']
+ alt_text = ""
+ if img.has_key("width") and img.has_key("height") and int(img['width']) <= 70 and int(img['height']) <= 70:
+ too_small = True
+ if img.has_key("alt") and img['alt'] != "":
+ alt_text = img['alt']
+ else:
+ alt_text = image_path.split("/")[-1]
+ if (not too_small) and self.image_list.refbody.find(attrs={"href" : image_path}) is None:
+ self.image_list.refbody.append(self.tag_generator("image", "<alt>%s</alt>" % alt_text, [("href", image_path)]))
+ img.extract()
+
+ def make_shortdesc(self):
+ """
+ Extracts the first paragraph from the input and wraps it in a 'shortdesc' tag
+ @return: new <shortdesc> tag containing the contents of the first paragraph
+ """
+ paragraphs = self.input.findAll("p")
+ for p in paragraphs:
+ contents = p.renderContents()
+ if len(contents) > 20 and (("." in contents) or ("?" in contents) or ("!" in contents)):
+ p.extract()
+ return self.create_paragraph(contents, "shortdesc")
+ return self.tag_generator("shortdesc")
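The short-description heuristic above (first paragraph longer than 20 characters that contains sentence punctuation) can be sketched independently over plain strings rather than parsed <p> tags (`pick_shortdesc` is a hypothetical stand-in that returns text instead of a tag):

```python
def pick_shortdesc(paragraphs):
    # Return the first paragraph that looks like real prose: more than
    # 20 characters and at least one sentence terminator, mirroring
    # HTML_Parser.make_shortdesc() above.
    for contents in paragraphs:
        if len(contents) > 20 and any(c in contents for c in ".?!"):
            return contents
    return ""  # make_shortdesc() falls back to an empty <shortdesc> tag

paras = ["Navigation", "Wolves are large canines native to Eurasia and North America."]
print(pick_shortdesc(paras))
```

The length threshold filters out navigation labels and caption fragments that Wikipedia pages often emit as short <p> elements.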
+
+ def parse(self):
+ """
+ parses the document
+ @return: String of document in DITA markup
+ """
+ #remove images
+ self.image_handler()
+ # pre-parse
+ self.pre_parse()
+ #identify the containing reference tag
+ output_reference = self.output_soup.find("reference")
+ #add the short description
+ output_reference.append(self.make_shortdesc())
+ #add the <prolog> tag to hold metadata
+ output_reference.append(self.tag_generator("prolog"))
+ #add the source url
+ output_reference.prolog.append('<source href="%s" />' % self.source)
+ #add the publisher
+ output_reference.prolog.append(self.tag_generator("publisher", self.get_publisher()))
+ the_date = date.today().strftime("%Y-%m-%d")
+ #add created and modified dates
+ output_reference.prolog.append(self.tag_generator('critdates', '<created date="%s" /><revised modified="%s" />' % (the_date, the_date)))
+ #add the first refbody
+ output_reference.append(self.tag_generator("refbody"))
+ #track whether text should be inserted in a section or into the refbody
+ in_section = False
+ #set current refbody and section pointers
+ current_refbody = output_reference.refbody
+ current_section = None
+ #call specialised method (redundant in this class, used for inheritance)
+ self.specialise()
+ #find the first tag
+ tag = self.input.find(self.root_node).findChild()
+ while tag is not None:
+ #set variable to avoid hammering the string conversion function
+ tag_name = tag.name
+ #ignore the root node
+ if tag_name == self.root_node:
+ pass
+ #paragraph action:
+ elif tag_name == "p":
+ if in_section:
+ #tag contents as sentences and add to current section
+ current_section.append(self.create_paragraph(tag.renderContents()))
+ else:
+ #tag contents as sentences and add to current refbody
+ current_refbody.append(self.create_paragraph(tag.renderContents()))
+ #section separator action
+ elif tag_name in self.section_separators:
+ #create a new section tag
+ new_section = self.tag_generator("section")
+ #make a title for the tag from heading contents
+ new_section.append(self.tag_generator("title", tag.renderContents()))
+ #hold a pointer to the new section
+ current_section = new_section
+ #add the new section to the current refbody
+ current_refbody.append(new_section)
+ #currently working in a section, not a refbody
+ in_section = True
+ #reference separator action:
+ elif tag_name in self.reference_separators:
+ #no longer working in a section
+ in_section = False
+ #create a new reference tag
+ new_reference = self.tag_generator("reference")
+ #make a title for the tag from heading contents
+ new_reference.append(self.tag_generator("title", tag.renderContents()))
+ #create a refbody tag for the reference
+ new_refbody = self.tag_generator("refbody")
+ #add refbody to the reference tag
+ new_reference.append(new_refbody)
+ #remember the current refbody tag
+ current_refbody = new_refbody
+ #add the new reference to the containing reference tag in the output
+ output_reference.append(new_reference)
+ #block element action
+ elif tag_name in self.block_elements:
+ if in_section:
+ #add block element to current section
+ current_section.append(self.tag_generator(tag_name, tag.renderContents()))
+ else:
+ #add block element to new section
+ current_refbody.append(self.tag_generator("section", self.tag_generator(tag_name, tag.renderContents())))
+ #find the next tag and continue
+ tag = tag.findNextSibling()
+ #append the image list
+ self.output_soup.reference.append(self.image_list)
+ #return output as a properly indented string
+ return self.output_soup.prettify()
+
+ def pre_parse(self):
+ """
+ Prepares the input for parsing
+ """
+ for tag in self.input.findAll(True, recursive=False):
+ self.unTag(tag)
+
+ def specialise(self):
+ """
+ Allows for specialised calls when inheriting
+ """
+ pass
+
+ def tag_generator(self, tag, contents=None, attrs=[]):
+ """
+ Generates new tags for the output, adding IDs where appropriate
+ @param tag: name of new tag
+ @param contents: Optional, contents to add to tag
+ @param attrs: Optional, attributes to add to tag
+ @return: new Tag object
+ """
+ if self.ids.has_key(tag) and attrs == []:
+ self.ids[tag] += 1
+ attrs = [("id", str(self.ids[tag]))]
+ if attrs != []:
+ new_tag = Tag(self.output_soup, tag, attrs)
+ else:
+ new_tag = Tag(self.output_soup, tag)
+ if contents != None:
+ new_tag.insert(0, contents)
+ attrs = []
+ return new_tag
+
+ def unTag(self, tag):
+ """
+ recursively removes unwanted tags according to defined lists
+ @param tag: tag hierarchy to work on
+ """
+ for child in tag.findChildren(True, recursive=False):
+ self.unTag(child)
+ if (self.remove_classes_regexp != "") and (tag.has_key("class") and (re.match(self.remove_classes_regexp, tag["class"]) != None)):
+ tag.extract()
+ elif tag.name in self.keep_tags:
+ new_tag = Tag(self.input, tag.name)
+ new_tag.contents = tag.contents
+ tag.replaceWith(new_tag)
+
+ elif tag.name in self.remove_tags_keep_content:
+ children = tag.findChildren(True, recursive=False)
+ if len(children)==1:
+ tag.replaceWith(children[0])
+ elif len(children) > 1:
+ new_tag = Tag(self.input, "p")
+ for child in tag.findChildren(True, recursive=False):
+ new_tag.append(child)
+ tag.replaceWith(new_tag)
+ else:
+ tag.replaceWith(tag.renderContents())
+ else:
+ tag.extract()
+
+
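The unTag() policy above has three outcomes per tag: keep it, unwrap it (drop the tag but promote its contents), or extract it entirely. A toy stdlib-only sketch of that strategy, using xml.etree instead of BeautifulSoup (an illustration of the approach, not the module's actual code):

```python
import xml.etree.ElementTree as ET

KEEP = {"p", "b"}          # analogue of keep_tags: the tag survives
UNWRAP = {"span", "font"}  # analogue of remove_tags_keep_content: contents promoted
# any other tag is dropped wholesale, like tag.extract()

def untag(elem):
    # process children first, mirroring the recursive descent in unTag()
    for child in list(elem):
        untag(child)
        if child.tag in KEEP:
            continue
        elem.remove(child)
        if child.tag in UNWRAP:
            # promote the text; a fuller version would splice grandchildren too
            elem.text = (elem.text or "") + (child.text or "") + (child.tail or "")

doc = ET.fromstring("<body><span>Hello </span><p>world</p><script>x=1</script></body>")
untag(doc)
print(ET.tostring(doc, encoding="unicode"))  # -> <body>Hello <p>world</p></body>
```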
diff --git a/infoslicer/processing/MediaWiki_Helper.py b/infoslicer/processing/MediaWiki_Helper.py
new file mode 100644
index 0000000..a20c838
--- /dev/null
+++ b/infoslicer/processing/MediaWiki_Helper.py
@@ -0,0 +1,267 @@
+# Copyright (C) IBM Corporation 2008
+
+import urllib
+from xml.dom import minidom
+import logging
+
+import net
+
+logger = logging.getLogger('infoslicer')
+
+"""
+Extend urllib class to spoof user-agent
+"""
+class NewURLopener(urllib.FancyURLopener):
+ version = "Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11"
+
+class PageNotFoundError(Exception):
+ def __init__(self, value):
+ self.parameter = value
+ def __str__(self):
+ return repr(self.parameter)
+
+class NoResultsError(Exception):
+ def __init__(self, value):
+ self.parameter = value
+ def __str__(self):
+ return repr(self.parameter)
+
+"""
+Default MediaWiki host
+"""
+defaultWiki = "en.wikipedia.org"
+
+
+"""
+This class handles interaction with MediaWiki, getting content
+based on a number of parameters such as URL, title and revision.
+"""
+class MediaWiki_Helper:
+
+ def __init__(self):
+ self.proxies = net.proxies
+
+ def resolveTitle(self, title, wiki=defaultWiki):
+ """Check if a wiki article exists using the mediawiki api. Follow redirects.
+
+ @param title: article title to resolve
+ @param wiki: optional. Defaults to default wiki
+ @return: validated article title
+ @rtype: string
+ @raise PageNotFoundError: if page not found"""
+ #replace spaces with underscores
+ title = title.replace(" ", "_")
+ #create the API request string
+ path = "http://%s/w/api.php?action=query&titles=%s&redirects&format=xml" % (wiki, title)
+ #parse the xml
+ xmldoc = minidom.parseString(self.getDoc(path))
+ #check page exists, return None if it doesn't
+ page = xmldoc.getElementsByTagName("page")
+ if (page != []):
+ if ("missing" in page[0].attributes.keys()):
+ raise PageNotFoundError("The article with title '%s' could not be found on wiki '%s'" % (title, wiki))
+ #check if there are any redirection tags defined
+ redirectList = xmldoc.getElementsByTagName("r")
+ #if the redirect list is empty, return the title
+ if redirectList == []:
+ return title
+ #if there is a redirect, recursively follow the chain
+ else:
+ return self.resolveTitle(redirectList[0].attributes["to"].value, wiki)
+
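resolveTitle()'s handling of the API reply can be exercised offline; the canned XML below imitates the assumed shape of a query&redirects response (no network involved):

```python
from xml.dom import minidom

# Canned replies imitating /w/api.php?action=query&titles=...&redirects&format=xml
MISSING = '<api><query><pages><page title="Nosuch" missing=""/></pages></query></api>'
REDIRECT = ('<api><query><redirects><r from="NYC" to="New York City"/></redirects>'
            '<pages><page title="New York City"/></pages></query></api>')

def resolve(xml_text, title):
    """Mirror resolveTitle(): raise on a missing page, follow an <r> redirect."""
    xmldoc = minidom.parseString(xml_text)
    page = xmldoc.getElementsByTagName("page")
    if page and "missing" in page[0].attributes.keys():
        raise ValueError("page not found: %s" % title)
    redirects = xmldoc.getElementsByTagName("r")
    return redirects[0].attributes["to"].value if redirects else title

print(resolve(REDIRECT, "NYC"))  # -> New York City
```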
+ def resolveRevision(self, revision, wiki=defaultWiki):
+ """ get an article by revision number.
+
+ @param revision: revision number to resolve
+ @param wiki: optional. Defaults to default wiki
+ @return: revision number if valid
+ @rtype: string
+ @raise PageNotFoundError: if page not found"""
+ path = "http://%s/w/api.php?action=query&format=xml&revids=%s" % (wiki, revision)
+ if ("page" in self.getDoc(path)):
+ return revision
+ else:
+ raise PageNotFoundError("The article with revision id '%s' could not be found on wiki '%s'" % (revision, wiki))
+
+ def getArticleAsWikiTextByTitle(self, title, wiki=defaultWiki):
+ """Gets the wiki markup of an article by its title from the wiki specified.
+
+ @param title: title of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in wiki markup
+ @rtype: string"""
+ #resolve the article title
+ title = self.resolveTitle(title, wiki)
+ #create the API request string
+ path = "http://%s/w/api.php?action=query&prop=revisions&titles=%s&rvprop=content&format=xml" % (wiki, title)
+ #remove xml tags around article
+ return self.stripTags(self.getDoc(path), "rev")
+
+ def getArticleAsWikiTextByURL(self, url):
+ """Gets the wiki markup of an article by its title from the wiki specified.
+
+ @param url: url of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in wiki markup
+ @rtype: string"""
+ args = self.breakdownURL(url)
+ if len(args) == 3:
+ return self.getArticleAsWikiTextByRevision(args[2], args[0])
+ else:
+ return self.getArticleAsWikiTextByTitle(args[1], args[0])
+
+ def getArticleAsWikiTextByRevision(self, revision, wiki=defaultWiki):
+ """Gets the wiki markup of an article by its revision id from the wiki specified.
+
+ @param revision: revision id of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in wiki markup
+ @rtype: string"""
+ self.resolveRevision(revision, wiki)
+ path = "http://%s/w/api.php?action=query&prop=revisions&revids=%s&rvprop=content&format=xml" % (wiki, revision)
+ return self.stripTags(self.getDoc(path), "rev")
+
+ def getArticleAsHTMLByTitle(self, title, wiki=defaultWiki):
+ """Gets the HTML markup of an article by its title from the wiki specified.
+
+ @param title: title of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in HTML markup
+ @rtype: string"""
+ #resolve article title
+ title = self.resolveTitle(title, wiki)
+ #create the API request string
+ path = "http://%s/w/api.php?action=parse&page=%s&format=xml" % (wiki,title)
+ #remove xml tags around article and fix HTML tags and quotes
+ #return fixHTML(stripTags(getDoc(path), "text"))
+ return self.fixHTML(self.getDoc(path)), path
+
+ def getArticleAsHTMLByURL(self, url):
+ """Gets the HTML markup of an article by its title from the wiki specified.
+
+ @param url: url of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in HTML markup
+ @rtype: string"""
+ args = self.breakdownURL(url)
+ if len(args) == 3:
+ return self.getArticleAsHTMLByRevision(args[2], args[0])
+ else:
+ return self.getArticleAsHTMLByTitle(args[1], args[0])
+
+ def getArticleAsHTMLByRevision(self, revision, wiki=defaultWiki):
+ """Gets the HTML markup of an article by its revision id from the wiki specified.
+
+ @param revision: revision id of article to retrieve
+ @param wiki: optional. Defaults to default wiki
+ @return: article content in HTML markup
+ @rtype: string"""
+ self.resolveRevision(revision, wiki)
+ path = "http://%s/w/api.php?action=parse&oldid=%s&format=xml" % (wiki,revision)
+ #remove xml tags around article and fix HTML tags and quotes
+ return self.fixHTML(self.stripTags(self.getDoc(path), "text"))
+
+ def breakdownURL(self, url):
+ """pulls out wiki address, title and revision id from a wiki URL
+
+ @param url: url to process
+ @return: information from url
+ @rtype: list"""
+ outputlist = []
+ url = url.replace("http://", "")
+ outputlist.append(url.split("/")[0])
+ if ("title=" in url):
+ outputlist.append(url.split("title=")[-1].split("&")[0])
+ if ("oldid=" in url):
+ outputlist.append(url.split("oldid=")[-1].split("&")[0])
+ else:
+ outputlist.append(url.split("/")[-1])
+ return outputlist
+
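breakdownURL()'s string surgery is pure Python and easy to check; this standalone copy (with the oldid check nested under the title= branch) shows the intended decomposition:

```python
def breakdown_url(url):
    """Split a MediaWiki URL into [host, title] or [host, title, oldid]."""
    out = []
    url = url.replace("http://", "")
    out.append(url.split("/")[0])                        # wiki host
    if "title=" in url:                                  # index.php?title=... form
        out.append(url.split("title=")[-1].split("&")[0])
        if "oldid=" in url:                              # optional revision id
            out.append(url.split("oldid=")[-1].split("&")[0])
    else:                                                # pretty /wiki/Title form
        out.append(url.split("/")[-1])
    return out

print(breakdown_url("http://en.wikipedia.org/wiki/Sugar_Labs"))
# -> ['en.wikipedia.org', 'Sugar_Labs']
print(breakdown_url("http://en.wikipedia.org/w/index.php?title=Sugar_Labs&oldid=1234"))
# -> ['en.wikipedia.org', 'Sugar_Labs', '1234']
```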
+ def getDoc(self, path):
+ """opens a remote file by http and retrieves data
+
+ @param path: location of remote file
+ @return: page contents
+ @rtype: string"""
+ urllib._urlopener = NewURLopener()
+ logger.debug("opening " + path)
+ logger.debug("proxies: " + str(self.proxies))
+ doc = urllib.urlopen(path, proxies=self.proxies)
+ output = doc.read()
+ doc.close()
+ logger.debug("url opened successfully")
+ return output
+
+ def stripTags(self, input, tag):
+ """removes specified tag
+
+ @param input: string to work on
+ @param tag: tag to remove
+ @return: original string with specified tag removed
+ @rtype: string"""
+ return input.split("<%s>" % (tag), 1)[1].split("</%s>" % (tag), 1)[0]
+
+ def fixHTML(self, input):
+ """converts &lt;, &gt; and &quot; entities into <, > and " characters
+
+ @param input: input string to work on
+ @return: modified version of input
+ @rtype: string"""
+ return input.replace("&lt;", "<").replace("&gt;", ">").replace("&quot;", '"')
+
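stripTags() and fixHTML() are both simple string transforms; a quick offline check (html.unescape does in one call what fixHTML does by hand, assuming only those three entities matter):

```python
from html import unescape

def strip_tags(text, tag):
    """Return the content between the first <tag>...</tag> pair, like stripTags()."""
    return text.split("<%s>" % tag, 1)[1].split("</%s>" % tag, 1)[0]

api_reply = "<api><query><rev>'''Sugar Labs''' is a project.</rev></query></api>"
print(strip_tags(api_reply, "rev"))                   # -> '''Sugar Labs''' is a project.
print(unescape("&lt;p&gt;&quot;hi&quot;&lt;/p&gt;"))  # -> <p>"hi"</p>
```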
+ def getImageURLs(self, title, wiki=defaultWiki, revision=None):
+ """returns a list of the URLs of every image on the specified page on the (optional) specified wiki
+ @deprecated: This task is now performed at the parsing stage
+ """
+ #check article title is valid, follow redirects
+ title = self.resolveTitle(title, wiki)
+ #proceed if title is valid
+ if (title != None):
+ #create the API request string
+ path = "http://%s/w/api.php?action=query&prop=images&titles=%s&format=xml" % (wiki, title)
+ xmldoc = minidom.parseString(self.getDoc(path))
+ imglist = xmldoc.getElementsByTagName("im")
+ outputlist = []
+ for i in xrange(len(imglist)):
+ #create the API request string
+ path = "http://%s/w/api.php?action=query&titles=%s&prop=imageinfo&iiprop=url&format=xml" % (wiki, imglist[i].attributes["title"].value.replace(" ","_"))
+ xmldoc2 = minidom.parseString(self.getDoc(path))
+ #append image url to output
+ outputlist.append(xmldoc2.getElementsByTagName("ii")[0].attributes["url"].value)
+ #return outputlist
+ return []
+
+ def getImages(self, title, wiki=defaultWiki):
+ """returns a list of the URLs of every image on the specified page on the (optional) specified wiki
+ @deprecated: This task is now performed at the saving stage
+ """
+ imglist = self.getImageURLs(title, wiki)
+ outputlist = []
+ if imglist !=[]:
+ for i in imglist:
+ outputlist.append(self.getDoc(i))
+ return outputlist
+
+ def searchWiki(self, search, wiki=defaultWiki):
+ """Search a wiki using the openSearch protocol.
+
+ @param search: string to search for
+ @param wiki: optional. Defaults to default wiki
+ @return: search results and description pairs
+ @rtype: list"""
+ path = "http://%s/w/api.php?action=opensearch&search=%s&format=xml" % (wiki, search)
+ output = minidom.parseString(self.getDoc(path))
+ results = []
+ for item in output.getElementsByTagName("Item"):
+ results.append((item.getElementsByTagName("Text")[0].firstChild.data, item.getElementsByTagName("Description")[0].firstChild.data))
+ return results
+
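searchWiki()'s pairing of Text/Description items can likewise be tried on a canned reply (the XML shape here is assumed from the OpenSearch suggestions format):

```python
from xml.dom import minidom

REPLY = """<SearchSuggestion><Section>
<Item><Text>Python</Text><Description>A programming language</Description></Item>
<Item><Text>Python (genus)</Text><Description>A genus of snakes</Description></Item>
</Section></SearchSuggestion>"""

doc = minidom.parseString(REPLY)
# pair up each result title with its description, as searchWiki() does
results = [(item.getElementsByTagName("Text")[0].firstChild.data,
            item.getElementsByTagName("Description")[0].firstChild.data)
           for item in doc.getElementsByTagName("Item")]
print(results)
```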
+ # TODO: make this work with new searchWiki method
+ """def getFirstSearchResult(search, wiki=defaultWiki):
+ xmldoc = minidom.parseString(searchWiki(search, wiki))
+ resultList = xmldoc.getElementsByTagName("Item")
+ if (len(resultList) > 0):
+ return stripTags(resultList[0].getElementsByTagName("Text")[0].toxml(), "Text")
+ else:
+ raise noResultsError("No results found for '%s' on wiki: %s" % (search, wiki))"""
diff --git a/infoslicer/processing/MediaWiki_Parser.py b/infoslicer/processing/MediaWiki_Parser.py
new file mode 100644
index 0000000..913f03e
--- /dev/null
+++ b/infoslicer/processing/MediaWiki_Parser.py
@@ -0,0 +1,87 @@
+# Copyright (C) IBM Corporation 2008
+
+from HTML_Parser import HTML_Parser, NoDocException
+import re
+import logging
+
+logger = logging.getLogger('infoslicer')
+
+class MediaWiki_Parser(HTML_Parser):
+
+ #Overriding the regexp so that various non-data content (see also, table of contents etc.) is removed
+ remove_classes_regexp = re.compile("toc|noprint|metadata|sisterproject|boilerplate|reference(?!s)|thumb|navbox|editsection")
+
+ def __init__(self, document_to_parse, title, source_url):
+ if document_to_parse == None:
+ raise NoDocException("No content to parse - supply document to __init__")
+
+ logger.debug('MediaWiki_Parser: %s' % source_url)
+
+ header, input_content = document_to_parse.split("<text>")
+
+ #find the revision id in the xml the wiki API returns
+ revid = re.findall(re.compile('\<parse.*revid\=\"(?P<rid>[0-9]*)\"'),
+ header)
+
+ input_content = input_content.split("</text>")[0]
+ #call the normal constructor
+ HTML_Parser.__init__(self, "<body>" + input_content + "</body>", title, source_url)
+ #overwrite the source variable
+ self.source = "http://" + source_url.replace("http://", "").split("/")[0] + "/w/index.php?oldid=%s" % revid[0]
+
+ def specialise(self):
+ """
+ Overrides the parent parser's specialise() call to find the infobox in a wiki article
+ """
+ #infobox should be first table
+ first_table = self.input.find("table")
+ #the word "infobox" should be in the class name somewhere
+ if (first_table != None and first_table.has_key("class") and (re.match(re.compile("infobox"), first_table["class"]) != None)):
+ #make a new output tag to work with
+ infobox_tag = self.tag_generator("section", attrs=[("id", "infobox")])
+ #sometimes infobox data is in an inner table
+ inner_table = first_table.table
+ #sometimes it isn't :-(
+ if inner_table == None:
+ #if there isn't an inner table, work on the outer table
+ inner_table = first_table
+ # the title _should_ be in a "colspan == 2" tag
+ inner_table_title = first_table.find(attrs={ "colspan" : "2"})
+ #don't break if title can't be found
+ if inner_table_title != None:
+ #get the title
+ inner_table_title_temp = inner_table_title.renderContents()
+ #remove the title so it isn't processed twice
+ inner_table_title.extract()
+ inner_table_title = inner_table_title_temp
+ else:
+ # if there is an inner table, the title will be in the containing table - hunt it down.
+ inner_table_title = inner_table.findParent("tr").findPreviousSibling("tr").findChild("th").renderContents()
+ #finally append the title to the tag
+ infobox_tag.append(self.tag_generator("title", inner_table_title))
+ #generate the properties list
+ properties_tag = self.tag_generator("properties")
+ infobox_tag.append(properties_tag)
+ #each property is a row in the table
+ for tr in inner_table.findAll("tr"):
+ #make sure the row isn't empty
+ if tr.findChild() != None:
+ #make a new <property> tag
+ property_tag = self.tag_generator("property")
+ #table cells are either th or td
+ table_cells = tr.findAll(re.compile("th|td"))
+ if len(table_cells) == 0:
+ pass
+ elif len(table_cells) == 1:
+ #if there's only one cell on the row, make it a value
+ property_tag.append(self.tag_generator("propvalue", table_cells[0].renderContents()))
+ else:
+ #if there are two cells on the row, the first is the property type, the second is the value
+ property_tag.append(self.tag_generator("proptype", table_cells[0].renderContents().replace(":", "")))
+ property_tag.append(self.tag_generator("propvalue", table_cells[1].renderContents()))
+ #add the property to the <properties> tag
+ properties_tag.append(property_tag)
+ #add the infobox to the output
+ self.output_soup.refbody.append(infobox_tag)
+ #remove the first table to avoid parsing twice
+ first_table.extract()
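The infobox row handling above (one cell means a bare value, two cells mean a type/value pair) can be sketched with xml.etree on a small well-formed table; this illustrates the pairing logic only, not the BeautifulSoup traversal itself:

```python
import xml.etree.ElementTree as ET

table = ET.fromstring(
    "<table>"
    "<tr><th>Paradigm:</th><td>multi-paradigm</td></tr>"
    "<tr><td>spanning caption</td></tr>"
    "<tr></tr>"                       # empty rows are skipped
    "</table>")

properties = []
for tr in table.findall("tr"):
    cells = [c for c in tr if c.tag in ("th", "td")]
    if len(cells) == 1:               # single cell -> value only
        properties.append((None, cells[0].text))
    elif len(cells) >= 2:             # first cell is the type, second the value
        properties.append((cells[0].text.replace(":", ""), cells[1].text))
print(properties)  # -> [('Paradigm', 'multi-paradigm'), (None, 'spanning caption')]
```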
diff --git a/infoslicer/processing/NewtifulSoup.py b/infoslicer/processing/NewtifulSoup.py
new file mode 100644
index 0000000..4e26a12
--- /dev/null
+++ b/infoslicer/processing/NewtifulSoup.py
@@ -0,0 +1,9 @@
+# Copyright (C) IBM Corporation 2008
+
+from BeautifulSoup import BeautifulStoneSoup
+
+#Extend beautiful soup HTML parsing library
+#to recognise new self-closing tag <reference>
+class NewtifulStoneSoup(BeautifulStoneSoup):
+ NESTABLE_TAGS = BeautifulStoneSoup.NESTABLE_TAGS
+ NESTABLE_TAGS['reference'] = 'reference'
\ No newline at end of file diff --git a/infoslicer/processing/Paragraph.py b/infoslicer/processing/Paragraph.py new file mode 100644 index 0000000..7c743c7 --- /dev/null +++ b/infoslicer/processing/Paragraph.py @@ -0,0 +1,258 @@ +# Copyright (C) IBM Corporation 2008
+
+from Sentence import *
+import logging
+
+logger = logging.getLogger('infoslicer')
+
+"""
+Created by Jonathan Mace
+
+The classes here each correspond to a paragraph in the given text buffer.
+
+You should not instantiate these classes directly.
+
+Use the "level above" class or the Article class to apply changes to the textbuffer
+or structure of the article.
+
+"""
+
+"""
+A Paragraph instance contains a list of sentences. It has methods for inserting,
+deleting and rearranging sentences within itself, as well as other housekeeping
+functions.
+
+"""
+
+class RawParagraph:
+
+ def __init__(self, id, source_article_id, source_section_id, source_paragraph_id, sentences, buf):
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.source_paragraph_id = source_paragraph_id
+ self.sentences = sentences
+ self.buf = buf
+
+ def insertSentence(self, sentence_data, lociter):
+ insertionindex = self.__get_best_sentence(lociter)
+ insertioniter = self.sentences[insertionindex].getStart()
+ if sentence_data.type == "sentence":
+ sentence = Sentence(sentence_data, self.buf, insertioniter)
+ elif sentence_data.type == "picture":
+ sentence = Picture(sentence_data, self.buf, insertioniter)
+ else:
+     logger.debug("WARNING, WEIRD SENTENCES: %s" % (sentence_data.type))
+     return
+ self.sentences.insert(insertionindex, sentence)
+
+ def deleteSentence(self, lociter):
+ deletionindex = self.__get_exact_sentence(lociter)
+ if deletionindex != len(self.sentences) - 1:
+ sentence = self.sentences[deletionindex]
+ sentence.delete()
+ del self.sentences[deletionindex]
+ if len(self.sentences) == 1:
+ return True
+ else:
+ return False
+
+ def removeSentence(self, lociter):
+ removalindex = self.__get_exact_sentence(lociter)
+ if removalindex != len(self.sentences) - 1:
+ sentence = self.sentences[removalindex]
+ sentence.remove()
+ del self.sentences[removalindex]
+ if len(self.sentences) == 1:
+ return True
+ else:
+ return False
+
+ def delete(self):
+ for sentence in self.sentences:
+ sentence.delete()
+
+ def deleteSelection(self, startiter, enditer):
+ startindex = self.__get_exact_sentence(startiter)
+ endindex = self.__get_exact_sentence(enditer)
+ for i in range(startindex, endindex):
+ self.sentences[startindex].delete()
+ del self.sentences[startindex]
+ if len(self.sentences) == 1:
+ return True
+ else:
+ return False
+
+ def remove(self):
+ for sentence in self.sentences:
+ sentence.remove()
+
+ def getSentence(self, lociter):
+ sentenceindex = self.__get_exact_sentence(lociter)
+ return self.sentences[sentenceindex]
+
+ def getBestSentence(self, lociter):
+ sentenceindex = self.__get_best_sentence(lociter)
+ if sentenceindex == len(self.sentences):
+ return self.sentences[-1]
+ else:
+ return self.sentences[sentenceindex]
+
+ def getStart(self):
+ return self.sentences[0].getStart()
+
+
+ def getEnd(self):
+ return self.sentences[-1].getEnd()
+
+ def __get_best_sentence(self, lociter):
+ sentenceindex = self.__get_exact_sentence(lociter)
+ sentence = self.sentences[sentenceindex]
+ left = sentence.getStart().get_offset()
+ middle = lociter.get_offset()
+ right = sentence.getEnd().get_offset()
+ leftdist = middle - left
+ rightdist = right - middle
+
+ if (sentenceindex < len(self.sentences)) and (leftdist > rightdist):
+ sentenceindex = sentenceindex +1
+ return sentenceindex
+
+
+ def __get_exact_sentence(self, lociter):
+ i = 0
+ for i in range(len(self.sentences)-1):
+ start = self.sentences[i+1].getStart()
+ if lociter.compare(start) == -1:
+ return i
+ return len(self.sentences) - 1
+
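The two index searches above reduce to interval arithmetic over character offsets; a buffer-free sketch (plain integers standing in for gtk TextIters):

```python
def exact_index(starts, offset):
    """Index of the item whose span contains offset; cf. __get_exact_sentence."""
    for i in range(len(starts) - 1):
        if offset < starts[i + 1]:
            return i
    return len(starts) - 1

def best_index(starts, offset, end):
    """Round up to the next item when offset is past the midpoint; cf. __get_best_sentence."""
    i = exact_index(starts, offset)
    right = starts[i + 1] if i + 1 < len(starts) else end
    if offset - starts[i] > right - offset:
        i += 1
    return i

print(exact_index([0, 10, 20], 12))     # -> 1
print(best_index([0, 10, 20], 19, 30))  # -> 2
```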
+ def getId(self):
+ return self.id
+
+ def getData(self):
+ id = self.id
+ source_article_id = self.source_article_id
+ source_section_id = self.source_section_id
+ source_paragraph_id = self.source_paragraph_id
+ sentences_data = []
+ for sentence in self.sentences[0:len(self.sentences)-1]:
+ sentences_data.append(sentence.getData())
+
+ data = Paragraph_Data(id, source_article_id, source_section_id, source_paragraph_id, sentences_data)
+ return data
+
+ def getDataRange(self, startiter, enditer):
+ startindex = self.__get_exact_sentence(startiter)
+ endindex = self.__get_exact_sentence(enditer)
+ sentences_data = []
+ for sentence in self.sentences[startindex:endindex]:
+ sentences_data.append(sentence.getData())
+ return sentences_data
+
+ def mark(self):
+ markiter = self.getStart()
+ self.markmark = self.buf.create_mark(None, markiter, True)
+ arrow = gtk.gdk.pixbuf_new_from_xpm_data(arrow_xpm)
+ self.buf.insert_pixbuf(markiter, arrow)
+
+ def unmark(self):
+ markiter = self.buf.get_iter_at_mark(self.markmark)
+ markenditer = self.buf.get_iter_at_offset(markiter.get_offset()+1)
+ self.buf.delete(markiter, markenditer)
+ self.buf.delete_mark(self.markmark)
+
+ def getSentences(self):
+ return self.sentences
+
+ def getText(self):
+ return self.buf.get_slice(self.getStart(), self.getEnd())
+
+ def clean(self):
+ if len(self.sentences) > 1:
+ sentence = self.sentences[-2]
+ if sentence.getId() == -1:
+ sentence.delete()
+ del self.sentences[-2]
+ if len(self.sentences) == 1:
+ return True
+ else:
+ return False
+ return True
+
+ def checkIntegrity(self, nextiter):
+
+ i = 0
+ sentences = []
+ while i < len(self.sentences) - 1:
+ sentence = self.sentences[i]
+ nextsentence = self.sentences[i+1]
+
+ if sentence.getStart().compare(nextsentence.getStart()) == -1:
+ sentences.extend(sentence.checkIntegrity(nextsentence.getStart()))
+ else:
+ sentence.remove()
+ del self.sentences[i]
+ i = i - 1
+
+ i = i + 1
+
+ sentence = self.sentences[-1]
+ if sentence.getStart().compare(nextiter) == -1:
+ sentences.extend(sentence.checkIntegrity(nextiter))
+
+ paragraphs = []
+ paragraphstart = 0
+ for i in range(len(sentences)-1):
+ if sentences[i].getText() == "\n":
+ paragraphs.append(RawParagraph(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, sentences[paragraphstart:i+1], self.buf))
+ paragraphstart = i + 1
+ paragraphs.append(RawParagraph(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, sentences[paragraphstart:len(sentences)], self.buf))
+
+ return paragraphs
+
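checkIntegrity()'s final regrouping step (split the repaired sentence list into paragraphs at newline sentinels) is list bookkeeping that can be tested in isolation:

```python
def split_paragraphs(sentences):
    """Group sentences into paragraphs; a newline sentence closes the paragraph it ends."""
    paragraphs, start = [], 0
    for i, text in enumerate(sentences[:-1]):
        if text == "\n":
            paragraphs.append(sentences[start:i + 1])
            start = i + 1
    paragraphs.append(sentences[start:])
    return paragraphs

print(split_paragraphs(["One.", "Two.", "\n", "Three.", "\n"]))
# -> [['One.', 'Two.', '\n'], ['Three.', '\n']]
```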
+ def generateIds(self):
+ if self.id == None or self.id == -1:
+ self.id = random.randint(100, 100000)
+ for sentence in self.sentences[0:len(self.sentences)-1]:
+ sentence.generateIds()
+ self.sentences[-1].id = -1
+
+class Paragraph( RawParagraph ):
+
+ def __init__(self, paragraph_data, buf, insertioniter):
+ id = paragraph_data.id
+ source_article_id = paragraph_data.source_article_id
+ source_section_id = paragraph_data.source_section_id
+ source_paragraph_id = paragraph_data.source_paragraph_id
+
+ sentences = []
+
+ insertionmark = buf.create_mark(None, insertioniter, False)
+ for sentence_data in paragraph_data.sentences_data:
+ insertioniter = buf.get_iter_at_mark(insertionmark)
+ if sentence_data.type == "sentence":
+ sentence = Sentence(sentence_data, buf, insertioniter)
+ elif sentence_data.type == "picture":
+ sentence = Picture(sentence_data, buf, insertioniter)
+ else:
+     logger.debug("WARNING, WEIRD SENTENCES: %s" % (sentence_data.type))
+     continue
+ sentences.append(sentence)
+
+ insertioniter = buf.get_iter_at_mark(insertionmark)
+ endsentencedata = Sentence_Data(id = -1, text = "\n")
+ sentences.append(Sentence(endsentencedata, buf, insertioniter))
+
+ buf.delete_mark(insertionmark)
+
+ RawParagraph.__init__(self, id, source_article_id, source_section_id, source_paragraph_id, sentences, buf)
+
+
+class dummyParagraph( Paragraph ):
+ def __init__(self, buf, insertioniter, leftgravity):
+ self.id = -1
+ self.source_article_id = -1
+ self.source_section_id = -1
+ self.source_paragraph_id = -1
+ self.buf = buf
+ self.sentences = [ dummySentence(buf, insertioniter, leftgravity) ]
diff --git a/infoslicer/processing/Section.py b/infoslicer/processing/Section.py
new file mode 100644
index 0000000..30e3dad
--- /dev/null
+++ b/infoslicer/processing/Section.py
@@ -0,0 +1,327 @@
+# Copyright (C) IBM Corporation 2008
+
+from Paragraph import *
+import logging
+
+logger = logging.getLogger('infoslicer')
+
+"""
+Created by Jonathan Mace
+
+The classes here each correspond to a section in the given text buffer.
+
+You should not instantiate these classes directly.
+
+Use the "level above" class or the Article class to apply changes to the textbuffer
+or structure of the article.
+
+"""
+
+"""
+A Section instance contains a list of paragraphs. It has methods for inserting,
+deleting and rearranging paragraphs within itself, as well as other housekeeping
+functions.
+
+"""
+
+class RawSection:
+
+ def __init__(self, id, source_article_id, source_section_id, paragraphs, buf):
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.paragraphs = paragraphs
+ self.buf = buf
+
+ def insertParagraph(self, paragraph_data, lociter):
+ insertionindex = self.__get_best_paragraph(lociter)
+ insertioniter = self.paragraphs[insertionindex].getStart()
+ paragraph = Paragraph(paragraph_data, self.buf, insertioniter)
+ self.paragraphs.insert(insertionindex, paragraph)
+
+ def deleteParagraph(self, lociter):
+ deletionindex = self.__get_exact_paragraph(lociter)
+ if deletionindex != len(self.paragraphs) - 1:
+ paragraph = self.paragraphs[deletionindex]
+ paragraph.delete()
+ del self.paragraphs[deletionindex]
+ if len(self.paragraphs) == 1:
+ return True
+ else:
+ return False
+
+ def removeParagraph(self, lociter):
+ removalindex = self.__get_exact_paragraph(lociter)
+ if removalindex != len(self.paragraphs) - 1:
+ paragraph = self.paragraphs[removalindex]
+ paragraph.remove()
+ del self.paragraphs[removalindex]
+ if len(self.paragraphs) == 1:
+ return True
+ else:
+ return False
+
+ def splitParagraph(self, lociter):
+ paragraphindex = self.__get_exact_paragraph(lociter)
+ paragraph = self.paragraphs[paragraphindex]
+ source_article_id = paragraph.source_article_id
+ source_section_id = paragraph.source_section_id
+ source_paragraph_id = paragraph.source_paragraph_id
+ firstdata = paragraph.getDataRange(paragraph.getStart(), lociter)
+ seconddata = paragraph.getDataRange(lociter, paragraph.getEnd())
+ mark = self.buf.create_mark(None, lociter, False)
+ if firstdata != [] and seconddata != []:
+ self.deleteParagraph(lociter)
+
+ insertioniter = self.buf.get_iter_at_mark(mark)
+ paragraphdata = Paragraph_Data(None, source_article_id, source_section_id, source_paragraph_id, firstdata)
+ paragraph = Paragraph(paragraphdata, self.buf, insertioniter)
+ self.paragraphs.insert(paragraphindex, paragraph)
+
+ insertioniter = self.buf.get_iter_at_mark(mark)
+ paragraphdata = Paragraph_Data(None, source_article_id, source_section_id, source_paragraph_id, seconddata)
+ paragraph = Paragraph(paragraphdata, self.buf, insertioniter)
+ self.paragraphs.insert(paragraphindex+1, paragraph)
+
+
+
+
+ def delete(self):
+ for paragraph in self.paragraphs:
+ paragraph.delete()
+
+ def remove(self):
+ for paragraph in self.paragraphs:
+ paragraph.remove()
+
+ def deleteSelection(self, startiter, enditer):
+ startindex = self.__get_exact_paragraph(startiter)
+ endindex = self.__get_exact_paragraph(enditer)
+ if endindex == len(self.paragraphs)-1:
+ endindex = endindex - 1
+ if startindex == endindex:
+ empty = self.paragraphs[startindex].deleteSelection(startiter, enditer)
+ if empty:
+ self.paragraphs[startindex].delete()
+ del self.paragraphs[startindex]
+ elif startindex < endindex:
+ startmark = self.buf.create_mark(None, startiter, True)
+ endmark = self.buf.create_mark(None, enditer, True)
+
+ endparagraph = self.paragraphs[endindex]
+ empty = endparagraph.deleteSelection(endparagraph.getStart(), self.buf.get_iter_at_mark(endmark))
+ if empty:
+ self.paragraphs[endindex].delete()
+ del self.paragraphs[endindex]
+ self.buf.delete_mark(endmark)
+
+ for i in range(startindex+1, endindex):
+ self.paragraphs[startindex+1].delete()
+ del self.paragraphs[startindex+1]
+
+ startparagraph = self.paragraphs[startindex]
+ empty = startparagraph.deleteSelection(self.buf.get_iter_at_mark(startmark), startparagraph.getEnd())
+ if empty:
+ self.paragraphs[startindex].delete()
+ del self.paragraphs[startindex]
+ self.buf.delete_mark(startmark)
+ if len(self.paragraphs) == 1:
+ return True
+ else:
+ return False
+
+
+ def getParagraph(self, lociter):
+ paragraphindex = self.__get_exact_paragraph(lociter)
+ return self.paragraphs[paragraphindex]
+
+ def getBestParagraph(self, lociter):
+ paragraphindex = self.__get_best_paragraph(lociter)
+ if paragraphindex == len(self.paragraphs):
+ return self.paragraphs[-1]
+ else:
+ return self.paragraphs[paragraphindex]
+
+ def getStart(self):
+ return self.paragraphs[0].getStart()
+
+ def getEnd(self):
+ return self.paragraphs[-1].getEnd()
+
+ def __get_best_paragraph(self, lociter):
+ paragraphindex = self.__get_exact_paragraph(lociter)
+ paragraph = self.paragraphs[paragraphindex]
+ left = paragraph.getStart().get_offset()
+ middle = lociter.get_offset()
+ right = paragraph.getEnd().get_offset()
+ leftdist = middle - left
+ rightdist = right - middle
+
+ if (paragraphindex < len(self.paragraphs)) and (leftdist > rightdist):
+ paragraphindex = paragraphindex +1
+ return paragraphindex
+
+ def __get_exact_paragraph(self, lociter):
+ i = 0
+ for i in range(len(self.paragraphs)-1):
+ start = self.paragraphs[i+1].getStart()
+ if lociter.compare(start) == -1:
+ return i
+ return len(self.paragraphs)-1
+
+ def getId(self):
+ return self.id
+
+ def getData(self):
+ id = self.id
+ source_article_id = self.source_article_id
+ source_section_id = self.source_section_id
+ paragraphs_data = []
+ for paragraph in self.paragraphs[0:len(self.paragraphs)-1]:
+ paragraphs_data.append(paragraph.getData())
+
+ data = Section_Data(id, source_article_id, source_section_id, paragraphs_data)
+ return data
+
+ def getDataRange(self, startiter, enditer):
+ startindex = self.__get_exact_paragraph(startiter)
+ endindex = self.__get_exact_paragraph(enditer)
+ if startindex == endindex:
+ return self.paragraphs[startindex].getDataRange(startiter, enditer)
+ else:
+ startdata = []
+ startparagraph = self.paragraphs[startindex]
+ if startiter.compare(startparagraph.getStart()) == 0:
+ startdata.append(self.paragraphs[startindex].getData())
+ else:
+ startdata.extend(startparagraph.getDataRange(startiter, startparagraph.getEnd()))
+ dummydata = Sentence_Data(id = -1, text = "")
+ startdata.append(dummydata)
+
+ middledata = []
+ for paragraph in self.paragraphs[startindex+1:endindex]:
+ middledata.append(paragraph.getData())
+
+ enddata = []
+ if endindex != len(self.paragraphs):
+ endparagraph = self.paragraphs[endindex]
+ enddata.extend(endparagraph.getDataRange(endparagraph.getStart(), enditer))
+
+ data = startdata + middledata + enddata
+
+ return data
+
+ def mark(self):
+ markiter = self.getStart()
+ self.markmark = self.buf.create_mark(None, markiter, True)
+ arrow = gtk.gdk.pixbuf_new_from_xpm_data(arrow_xpm)
+ self.buf.insert_pixbuf(markiter, arrow)
+
+ def unmark(self):
+ markiter = self.buf.get_iter_at_mark(self.markmark)
+ markenditer = self.buf.get_iter_at_offset(markiter.get_offset()+1)
+ self.buf.delete(markiter, markenditer)
+ self.buf.delete_mark(self.markmark)
+
+ def getParagraphs(self):
+ return self.paragraphs
+
+ def pad(self):
+ # pad() adds a dummy paragraph, containing the sentence " ", to this section
+ insertioniter = self.paragraphs[-1].getStart()
+ dummydata = Sentence_Data(id = -1, text = " ")
+ dummyparagraphdata = Paragraph_Data(id = -1, sentences_data = [dummydata])
+ paragraph = Paragraph(dummyparagraphdata, self.buf, insertioniter)
+ self.paragraphs.insert(-1, paragraph)
+
+ def clean(self):
+ # Removes the effects of pad.
+ # Returns true if, after removing the pad, the section has no meaningful content and can therefore be destroyed
+ if len(self.paragraphs) > 1:
+ if self.paragraphs[-2].clean():
+ del self.paragraphs[-2]
+ return len(self.paragraphs) == 1
+ else:
+ return True
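The pad/clean bookkeeping above can be illustrated with a plain list standing in for the paragraph sequence (a toy model, not the real classes): the list always ends with an end-marker paragraph, `pad()` slips a placeholder in front of it, and `clean()` undoes that and reports whether only the marker remains.

```python
def pad(paragraphs):
    # insert a placeholder paragraph just before the trailing end marker
    paragraphs.insert(-1, " ")

def clean(paragraphs):
    # drop the placeholder again, then report whether the section is empty
    if len(paragraphs) > 1 and paragraphs[-2] == " ":
        del paragraphs[-2]
    return len(paragraphs) == 1

section = ["end-marker"]        # an otherwise empty section
pad(section)                    # placeholder now sits before the marker
assert clean(section)           # empty again, so the section can be destroyed
```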
+
+ def getText(self):
+ return self.buf.get_slice(self.getStart(), self.getEnd())
+
+ def checkIntegrity(self, nextiter):
+ i = 0
+ paragraphs = []
+ while i < len(self.paragraphs) - 1:
+ paragraph = self.paragraphs[i]
+ nextparagraph = self.paragraphs[i+1]
+
+ if paragraph.getStart().compare(nextparagraph.getStart()) == -1:
+ text = self.buf.get_slice(paragraph.getStart(), nextparagraph.getStart())
+ if len(text) > 0 and text[-1] != "\n":
+ logger.debug("concatenating paragraphs")
+ nextparagraph.sentences = paragraph.sentences + nextparagraph.sentences
+ else:
+ paragraphs.extend(paragraph.checkIntegrity(nextparagraph.getStart()))
+ else:
+ paragraph.remove()
+ del self.paragraphs[i]
+ i = i - 1
+
+ i = i + 1
+
+ paragraph = self.paragraphs[-1]
+
+ if paragraph.getStart().compare(nextiter) == -1:
+ paragraphs.extend(paragraph.checkIntegrity(nextiter))
+
+ sections = []
+ paragraphstart = 0
+ for i in range(len(paragraphs)-1):
+ if paragraphs[i].getText() == "\n":
+ sections.append(RawSection(self.id, self.source_article_id, self.source_section_id, paragraphs[paragraphstart:i+1], self.buf))
+ paragraphstart = i + 1
+ sections.append(RawSection(self.id, self.source_article_id, self.source_section_id, paragraphs[paragraphstart:len(paragraphs)], self.buf))
+
+ return sections
+
+ def generateIds(self):
+ if self.id == None or self.id == -1:
+ self.id = random.randint(100, 100000)
+ for paragraph in self.paragraphs:
+ paragraph.generateIds()
+ self.paragraphs[-1].id = -1
+
+class Section( RawSection ):
+
+ def __init__(self, section_data, buf, insertioniter):
+ id = section_data.id
+ source_article_id = section_data.source_article_id
+ source_section_id = section_data.source_section_id
+
+ paragraphs = []
+ insertionmark = buf.create_mark(None, insertioniter, False)
+
+ for paragraph_data in section_data.paragraphs_data:
+ insertioniter = buf.get_iter_at_mark(insertionmark)
+ paragraphs.append(Paragraph(paragraph_data, buf, insertioniter))
+
+ insertioniter = buf.get_iter_at_mark(insertionmark)
+ endparagraphdata = Paragraph_Data(id = 1, sentences_data = [])
+ paragraphs.append(Paragraph(endparagraphdata, buf, insertioniter))
+
+ buf.delete_mark(insertionmark)
+
+ RawSection.__init__(self, id, source_article_id, source_section_id, paragraphs, buf)
+
+class dummySection(Section):
+ def __init__(self, buf, insertioniter, leftgravity):
+ self.id = -1
+ self.source_article_id = -1
+ self.source_section_id = -1
+ self.buf = buf
+ self.paragraphs = [ dummyParagraph(buf, insertioniter, leftgravity) ]
+
diff --git a/infoslicer/processing/Sentence.py b/infoslicer/processing/Sentence.py new file mode 100644 index 0000000..09c31f4 --- /dev/null +++ b/infoslicer/processing/Sentence.py @@ -0,0 +1,206 @@ +# Copyright (C) IBM Corporation 2008
+
+import pygtk
+pygtk.require('2.0')
+import os
+import gtk
+import logging
+import random
+
+from Article_Data import *
+
+"""
+Created by Jonathan Mace
+
+The classes here each correspond to a sentence in the given text buffer.
+
+You should not instantiate these classes directly.
+
+Use the "level above" class or the Article class to apply changes to the textbuffer
+or structure of the article.
+
+"""
+
+"""
+A sentence keeps textmarks corresponding to the start and end of the sentence in the buffer.
+
+It has methods for restructuring itself in the event that the textbuffer changes
+from an action not controlled by the Article object it is contained in.
+
+"""
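The mark-based span tracking described above can be modelled without GTK. The sketch below is illustrative only (the `Mark` and `Buffer` names are invented, not from this codebase); it mimics how a pair of text marks with opposite gravity keeps a sentence's span valid while text is inserted elsewhere in the buffer:

```python
class Mark:
    """A buffer position that survives edits, like gtk.TextMark."""
    def __init__(self, offset, left_gravity):
        self.offset = offset
        self.left_gravity = left_gravity

class Buffer:
    """A toy text buffer that updates its marks on insertion."""
    def __init__(self, text=""):
        self.text = text
        self.marks = []

    def create_mark(self, offset, left_gravity):
        mark = Mark(offset, left_gravity)
        self.marks.append(mark)
        return mark

    def insert(self, offset, text):
        self.text = self.text[:offset] + text + self.text[offset:]
        for mark in self.marks:
            # Marks past the insertion point shift right; a mark exactly
            # at the insertion point shifts only if it has right gravity.
            if mark.offset > offset or (mark.offset == offset and not mark.left_gravity):
                mark.offset += len(text)

buf = Buffer("Hello world.")
start = buf.create_mark(6, True)    # start of "world", sticks left
end = buf.create_mark(11, False)    # end of "world", sticks right
buf.insert(0, ">> ")                # edit elsewhere in the buffer
assert buf.text[start.offset:end.offset] == "world"
```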
+
+logger = logging.getLogger('infoslicer')
+
+class RawSentence:
+
+ def __init__(self, id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, buf, formatting, leftmark, rightmark):
+ self.id = id
+ self.source_article_id = source_article_id
+ self.source_section_id = source_section_id
+ self.source_paragraph_id = source_paragraph_id
+ self.source_sentence_id = source_sentence_id
+ self.buf = buf
+ self.formatting = formatting
+ self.leftmark = leftmark
+ self.rightmark = rightmark
+ self.type = "sentence"
+
+ def generateIds(self):
+ if self.id == None or self.id == -1:
+ self.id = random.randint(100, 100000)
+
+ def delete(self):
+ b = self.buf
+ l = b.get_iter_at_mark(self.leftmark)
+ r = b.get_iter_at_mark(self.rightmark)
+ b.delete(l, r)
+ b.delete_mark(self.leftmark)
+ b.delete_mark(self.rightmark)
+
+ def remove(self):
+ b = self.buf
+ b.delete_mark(self.leftmark)
+ b.delete_mark(self.rightmark)
+
+ def getStart(self):
+ return self.buf.get_iter_at_mark(self.leftmark)
+
+ def getEnd(self):
+ return self.buf.get_iter_at_mark(self.rightmark)
+
+ def getId(self):
+ return self.id
+
+ def getData(self):
+ id = self.id
+ source_article_id = self.source_article_id
+ source_section_id = self.source_section_id
+ source_paragraph_id = self.source_paragraph_id
+ source_sentence_id = self.source_sentence_id
+ text = self.getText()
+ formatting = self.formatting
+
+ data = Sentence_Data(id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, text, formatting)
+ return data
+
+ def getText(self):
+ return self.buf.get_slice(self.getStart(), self.getEnd())
+
+ def checkIntegrity(self, nextiter):
+ text = unicode(self.buf.get_slice(self.getStart(), nextiter))
+ lines = text.splitlines(True)
+ sentencestartoffset = self.getStart().get_offset()
+ sentences = []
+ if text == "":
+ return []
+ else:
+ for line in lines:
+ if line == "":
+ pass
+ elif line == "\n":
+ startmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset), False)
+ endmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset + 1), True)
+ sentences.append(RawSentence(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, self.source_sentence_id, self.buf, self.formatting, startmark, endmark))
+ sentencestartoffset = sentencestartoffset + 1
+ elif line[-1] == "\n":
+ startmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset), False)
+ endmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset + len(line)-1), True)
+ sentences.append(RawSentence(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, self.source_sentence_id, self.buf, self.formatting, startmark, endmark))
+ sentencestartoffset = sentencestartoffset + len(line)-1
+ startmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset), False)
+ endmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset + 1), True)
+ sentences.append(RawSentence(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, self.source_sentence_id, self.buf, self.formatting, startmark, endmark))
+ sentencestartoffset = sentencestartoffset + 1
+ else:
+ startmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset), False)
+ endmark = self.buf.create_mark(None, self.buf.get_iter_at_offset(sentencestartoffset + len(line)), True)
+ sentences.append(RawSentence(self.id, self.source_article_id, self.source_section_id, self.source_paragraph_id, self.source_sentence_id, self.buf, self.formatting, startmark, endmark))
+
+ return sentences
+
+class Sentence( RawSentence ):
+
+ def __init__(self, sentence_data, buf, insertioniter):
+
+ id = sentence_data.id
+ source_article_id = sentence_data.source_article_id
+ source_section_id = sentence_data.source_section_id
+ source_paragraph_id = sentence_data.source_paragraph_id
+ source_sentence_id = sentence_data.source_sentence_id
+
+ """
+ Here, apply formatting changes when necessary.
+ Yet to be implemented. """
+ formatting = sentence_data.formatting
+
+ rightmark = buf.create_mark(None, insertioniter, True)
+ leftmark = buf.create_mark(None, insertioniter, False)
+ buf.insert(insertioniter, unicode(sentence_data.text))
+ # after the insert, the left-gravity mark (rightmark) sits at the start of
+ # the new text and the right-gravity mark (leftmark) at its end, so swap
+ # the two into their proper places
+ left = buf.get_iter_at_mark(rightmark)
+ right = buf.get_iter_at_mark(leftmark)
+ buf.move_mark(leftmark, left)
+ buf.move_mark(rightmark, right)
+
+ RawSentence.__init__(self, id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, buf, formatting, leftmark, rightmark)
+
+class Picture( RawSentence ):
+
+ def __init__(self, picture_data, buf, insertioniter):
+ id = 0
+ source_article_id = picture_data.source_article_id
+ source_section_id = 0
+ source_paragraph_id = 0
+ source_sentence_id = 0
+ formatting = []
+
+ self.text = picture_data.text
+ self.orig = picture_data.orig
+
+ rightmark = buf.create_mark(None, insertioniter, True)
+ leftmark = buf.create_mark(None, insertioniter, False)
+
+ if os.path.isfile(picture_data.text):
+ pixbuf = gtk.gdk.pixbuf_new_from_file(picture_data.text)
+ buf.insert_pixbuf(insertioniter, pixbuf)
+ else:
+ logger.warning('cannot open image %s' % picture_data.text)
+
+ left = buf.get_iter_at_mark(rightmark)
+ right = buf.get_iter_at_mark(leftmark)
+ buf.move_mark(leftmark, left)
+ buf.move_mark(rightmark, right)
+
+ RawSentence.__init__(self, id, source_article_id, source_section_id, source_paragraph_id, source_sentence_id, buf, formatting, leftmark, rightmark)
+ self.type = "picture"
+
+ def getData(self):
+ return Picture_Data(self.source_article_id, self.text, self.orig)
+
+ def checkIntegrity(self, nextiter):
+ sentences = []
+ if self.getEnd().compare(nextiter) == 0:
+ return [self]
+ elif self.getStart().compare(self.getEnd()) < 0: # pixbuf still present: start precedes end
+ sentences.append(self)
+ if self.getEnd().compare(nextiter) < 0: # trailing text after the picture
+ startmark = self.buf.create_mark(None, self.getEnd(), False)
+ endmark = self.buf.create_mark(None, nextiter, True)
+ nextsentence = RawSentence(-1, self.source_article_id, 1, 1, 1, self.buf, [], startmark, endmark) # id -1 marks it for generateIds
+ nextsentences = nextsentence.checkIntegrity(nextiter)
+ sentences.extend(nextsentences)
+ return sentences
+
+
+class dummySentence( Sentence ):
+ def __init__(self, buf, insertioniter, leftgravity):
+ self.id = -1
+ self.source_article_id = -1
+ self.source_section_id = -1
+ self.source_paragraph_id = -1
+ self.source_sentence_id = -1
+ self.text = ""
+ self.formatting = []
+ self.buf = buf
+ self.leftmark = self.buf.create_mark(None, insertioniter, leftgravity)
+ self.rightmark = self.buf.create_mark(None, insertioniter, leftgravity)
+ self.type = "dummysentence"
+
diff --git a/infoslicer/processing/__init__.py b/infoslicer/processing/__init__.py new file mode 100644 index 0000000..1bc63d4 --- /dev/null +++ b/infoslicer/processing/__init__.py @@ -0,0 +1,3 @@ +# Copyright (C) IBM Corporation 2008
+
+# This file should exist, despite having no code in it
diff --git a/infoslicer/widgets/Edit_Pane.py b/infoslicer/widgets/Edit_Pane.py new file mode 100644 index 0000000..beff0f4 --- /dev/null +++ b/infoslicer/widgets/Edit_Pane.py @@ -0,0 +1,106 @@ +# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import logging
+from gettext import gettext as _
+
+from sugar.graphics.toolcombobox import ToolComboBox
+
+from Reading_View import Reading_View
+from Editing_View import Editing_View
+from infoslicer.processing.Article import Article
+
+logger = logging.getLogger('infoslicer')
+
+class Edit_Pane(gtk.HBox):
+ """
+ Created by Jonathan Mace
+
+ See __init__.py for overview of panes.
+
+ The Edit Pane gives a side-by-side view of the source article and edit article
+ and allows users to drag text selections from the left hand (source) to the right
+ hand side (edited version).
+
+ The article displayed in the left hand side (source) can be changed by the
+ drop-down menu (implemented in Compound_Widgets.Reading_View)
+
+ The toolbar gives options to change the selection type.
+ """
+
+ def __init__(self):
+ gtk.HBox.__init__(self)
+ self.toolitems = []
+
+ readarticle_box = gtk.VBox()
+ readarticle_box.show()
+
+ labeleb = gtk.EventBox()
+ labeleb.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("#EEEEEE"))
+ readarticle_box.pack_start(labeleb, False, False, 0)
+ labeleb.show()
+
+ self.articletitle = gtk.Label()
+ self.articletitle.set_justify(gtk.JUSTIFY_CENTER)
+ labeleb.add(self.articletitle)
+ self.articletitle.show()
+
+ """
+ Create reading and editing panels
+ """
+ self.readarticle = Reading_View()
+ self.readarticle.set_size_request(gtk.gdk.screen_width()/2, -1)
+ self.readarticle.show()
+ readarticle_box.pack_start(self.readarticle)
+ self.pack_start(readarticle_box, False)
+
+ self.editarticle = Editing_View()
+ self.pack_start(self.editarticle)
+ self.editarticle.show()
+
+ """ Snap selection box """
+ snap = ToolComboBox(label_text=_('Snap selection to:'))
+ snap.combo.append_item(0, _("Nothing"))
+ snap.combo.append_item(1, _("Sentences"))
+ snap.combo.append_item(2, _("Paragraphs"))
+ snap.combo.append_item(3, _("Sections"))
+ snap.combo.connect("changed", self.selection_mode_changed, None)
+ snap.combo.set_active(1)
+ self.toolitems.append(snap)
+
+ """
+ When highlighting text, while editing, different selection snap methods
+ can be used (characters, sentences, paragraphs and sections). Change the selection
+ mode based on user request
+ """
+ def selection_mode_changed(self, widget, data):
+ current_selection = widget.get_active()
+ if current_selection == 0:
+ self.readarticle.set_full_edit_mode()
+ self.editarticle.set_full_edit_mode()
+ elif current_selection == 1:
+ self.readarticle.set_sentence_selection_mode()
+ self.editarticle.set_sentence_selection_mode()
+ elif current_selection == 2:
+ self.readarticle.set_paragraph_selection_mode()
+ self.editarticle.set_paragraph_selection_mode()
+ elif current_selection == 3:
+ self.readarticle.set_section_selection_mode()
+ self.editarticle.set_section_selection_mode()
+ #logger.debug(current_selection)
+
+ def set_source_article(self, article):
+ self.articletitle.set_markup(
+ "<span size='medium'><b> %s </b> %s</span>" % \
+ (_("Article:"), article.article_title))
+
+ if self.readarticle.textbox.get_article() != article:
+ self.readarticle.textbox.set_article(article)
+
+ def set_working_article(self, article):
+ self.editarticle.articletitle.set_markup(
+ "<span size='medium'><b> %s </b> %s</span>" % \
+ (_("Article:"), article.article_title))
+ if self.editarticle.textbox.get_article() != article:
+ self.editarticle.textbox.set_article(article)
diff --git a/infoslicer/widgets/Editable_Textbox.py b/infoslicer/widgets/Editable_Textbox.py new file mode 100644 index 0000000..fd8711f --- /dev/null +++ b/infoslicer/widgets/Editable_Textbox.py @@ -0,0 +1,294 @@ +# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import cPickle
+import pango
+import copy
+from Textbox import Textbox
+
+SNAP_SENTENCE, SNAP_PARAGRAPH, SNAP_SECTION, SNAP_NONE = range(4)
+
+class Editable_Textbox( Textbox ):
+ """
+ Created by Jonathan Mace
+
+ This class implements its own special code for dragging and selecting.
+ It holds an Article instance, which provides the text buffer; any
+ modifications to the text buffer are made through that article.
+ """
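The "snap selection to sentence/paragraph/section" behaviour this class implements can be illustrated without a text buffer (a hypothetical helper, not code from this file): given the sorted offsets where units begin and end, a drag between two arbitrary points is widened to whole units.

```python
import bisect

def snap_selection(boundaries, start, end):
    """Widen (start, end) to whole units delimited by sorted offsets."""
    if start > end:
        # dragging right-to-left: normalise the span first
        start, end = end, start
    lo = boundaries[bisect.bisect_right(boundaries, start) - 1]
    hi = boundaries[bisect.bisect_left(boundaries, end)]
    return lo, hi

# Sentence boundaries at offsets 0, 12, 30 and 47: a drag from 15 to 33
# snaps outward to cover the two sentences it touches.
assert snap_selection([0, 12, 30, 47], 15, 33) == (12, 47)
```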
+
+ def __init__(self):
+ gtk.TextView.__init__(self)
+ self.set_border_width(1)
+ self.set_cursor_visible(True)
+ self.set_editable(True)
+ self.set_wrap_mode(gtk.WRAP_WORD)
+ self.article = None
+ self.set_mode(SNAP_SENTENCE)
+ self.changed = False
+ self.block = True
+
+ self.selecting = False
+ self.handlers = []
+ self.modify_font(pango.FontDescription('arial 9'))
+ self.ignore_snap_self = True
+ self.drag_source = False
+ self.edited = False
+ self.set_property("left-margin", 5)
+
+ def set_article(self, article):
+ self.article = article
+ self.set_buffer(article.getBuffer())
+
+ def get_article(self):
+ return self.article
+
+ def clear(self):
+ self.article.delete()
+
+ def get_mouse_iter(self, x, y):
+ click_coords = self.window_to_buffer_coords(gtk.TEXT_WINDOW_TEXT, x, y)
+ mouseClickPositionIter = self.get_iter_at_location(click_coords[0], click_coords[1])
+ return mouseClickPositionIter
+
+ def set_mode(self, snapto):
+ self.snapto = snapto
+
+ def set_buffer(self, buffer):
+ for handler in self.handlers:
+ self.disconnect(handler)
+
+ buffer.connect("changed", self.text_changed, None)
+ gtk.TextView.set_buffer(self, buffer)
+
+ self.handlers = []
+
+ self.handlers.append(self.connect("button-press-event", self.clicked_event, None))
+ self.handlers.append(self.connect("button-release-event", self.unclicked_event, None))
+ self.handlers.append(self.connect("drag_data_get", self.drag_data_get_event, None))
+ self.handlers.append(self.connect("drag_begin", self.drag_begin_event, None))
+ self.handlers.append(self.connect("drag-motion", self.drag_motion_event, None))
+ self.handlers.append(self.connect("drag-drop", self.drag_drop_event, None))
+ self.handlers.append(self.connect("drag-leave", self.drag_leave_event, None))
+ self.handlers.append(self.connect("drag-data-delete", self.drag_data_delete_event, None))
+ self.handlers.append(self.connect("drag_data_received", self.drag_data_received_event, None))
+ self.handlers.append(self.connect("drag-end", self.drag_end_event, None))
+ self.handlers.append(self.connect("motion-notify-event", self.motion_notify, None))
+ self.handlers.append(self.connect("focus-out-event", self.leave_notify, None))
+
+ def text_changed(self, buffer, data):
+ self.changed = True
+ self.selecting = False
+
+ def motion_notify(self, widget, event, data):
+ if not self.ignore_snap_self and self.selecting:
+ """ The following code implements the 'snapping' to sentences etc.
+
+ The base class responds to motion notify events and does some unknown (to me)
+ action which, for some reason, must complete; otherwise, on some platforms, it
+ will stop any further motion notify events.
+
+ So what happens is: the first 'run through' of the motion notify responder emits
+ another motion notify event, ignores it and lets the base class respond to it.
+ Then, when control is given back to the first emission, we run our own code.
+ The order of events is:
+
+ 1) motion notify event 1 emitted naturally
+ 2) our class responds to motion notify event 1
+ 3) motion notify event 2 emitted by step 2)
+ 4) our class ignores motion notify event 2
+ 5) the default class acts upon motion notify event 2
+ 6) motion notify event 2 finishes naturally
+ 7) our class does its stuff
+ 8) motion notify event 1 finishes by our class stopping its emission
+
+ """
+
+
+ if self.block == True:
+ self.stop_emission("motion-notify-event")
+ self.block = False
+ self.emit("motion-notify-event", event)
+
+ buf = self.get_buffer()
+ mouseiter = self.get_mouse_iter(int(event.x), int(event.y))
+ article = self.get_article()
+
+ if mouseiter.compare(self.selectionstart) == 1:
+ if self.snapto == SNAP_SENTENCE:
+ selectionstart = article.getSentence(self.selectionstart).getStart()
+ selectionend = article.getSentence(mouseiter).getEnd()
+ elif self.snapto == SNAP_PARAGRAPH:
+ selectionstart = article.getParagraph(self.selectionstart).getStart()
+ selectionend = article.getParagraph(mouseiter).getEnd()
+ elif self.snapto == SNAP_SECTION:
+ selectionstart = article.getSection(self.selectionstart).getStart()
+ selectionend = article.getSection(mouseiter).getEnd()
+ else:
+ if self.snapto == SNAP_SENTENCE:
+ selectionstart = article.getSentence(mouseiter).getStart()
+ selectionend = article.getSentence(self.selectionstart).getEnd()
+ elif self.snapto == SNAP_PARAGRAPH:
+ selectionstart = article.getParagraph(mouseiter).getStart()
+ selectionend = article.getParagraph(self.selectionstart).getEnd()
+ elif self.snapto == SNAP_SECTION:
+ selectionstart = article.getSection(mouseiter).getStart()
+ selectionend = article.getSection(self.selectionstart).getEnd()
+ self.scroll_to_iter(mouseiter, 0)
+ article.highlight(selectionstart, selectionend)
+
+ else:
+ self.block = True
+
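The re-emission trick documented in motion_notify can be modelled without GTK (a hypothetical sketch, not this widget's real event machinery): a guard flag lets the first invocation hand one nested event to the default handler before running its own logic.

```python
class Widget:
    """Toy model of the blocked re-emission pattern used above."""
    def __init__(self):
        self.block = True
        self.log = []

    def default_handler(self, event):
        self.log.append("default:%s" % event)

    def on_motion(self, event):
        if self.block:
            # First entry: suppress our own work, re-emit so the default
            # handler can complete on the nested event (steps 2-3 above).
            self.block = False
            self.emit(event)
            # Nested emission has finished; now run our code (step 7).
            self.log.append("custom:%s" % event)
        else:
            # Nested emission: let the default handler act (steps 4-5).
            self.default_handler(event)
            self.block = True

    def emit(self, event):
        self.on_motion(event)

w = Widget()
w.emit("e1")
# the default handler completes on the nested event before our code runs
assert w.log == ["default:e1", "custom:e1"]
```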
+ def clicked_event(self, widget, event, data):
+ if event.type == gtk.gdk._2BUTTON_PRESS or event.type == gtk.gdk._3BUTTON_PRESS:
+ self.stop_emission("button_press_event")
+ return
+ if event.button == 3:
+ self.stop_emission("button_press_event")
+ return
+ if self.changed == True:
+ buf = self.get_buffer()
+ article = self.get_article()
+
+ article.checkIntegrity()
+ self.changed = False
+ if not self.get_buffer().get_has_selection():
+ result = self.do_button_press_event(widget, event)
+
+ a = self.article
+ loc_iter = self.get_mouse_iter(int(event.x), int(event.y))
+
+ self.selecting = True
+ self.selectionstart = loc_iter
+ self.stop_emission("button-press-event")
+ return result
+ else:
+ buf = self.get_buffer()
+ bounds = buf.get_selection_bounds()
+ if bounds == ():
+ return
+ start = bounds[0]
+ end = bounds[1]
+ if start.compare(end) == 1:
+ start, end = end, start
+ loc = self.get_mouse_iter(int(event.x), int(event.y))
+ if start.compare(loc) == 1 or loc.compare(end) == 1:
+ self.do_button_press_event(widget, event)
+ a = self.article
+ self.selecting = True
+ self.selectionstart = loc
+ self.stop_emission("button-press-event")
+
+ def leave_notify(self, widget, event, data):
+ if self.changed == True:
+ offset = self.get_buffer().get_property("cursor-position")
+ self.article.checkIntegrity()
+ newbuf = self.article.getBuffer()
+ self.set_buffer(newbuf)
+ self.changed = False
+ iter = newbuf.get_iter_at_offset(offset)
+ newbuf.place_cursor(iter)
+
+ def unclicked_event(self, widget, event, data):
+ if self.snapto != SNAP_NONE:
+ self.article.clearArrow()
+ self.do_button_release_event(widget, event)
+ self.selecting = False
+ return True
+ else:
+ return False
+
+ def drag_begin_event(self, widget, context, data):
+ self.grab_focus()
+ if self.snapto != SNAP_NONE:
+ a = self.article
+ a.rememberSelection()
+ self.drag_source = True
+
+ def drag_drop_event(self, widget, context, x, y, time, data):
+ if self.snapto != SNAP_NONE:
+ self.article.clearArrow()
+ self.set_cursor_visible(True)
+
+ def drag_motion_event(self, widget, drag_context, x, y, time, data):
+ if (self.snapto != SNAP_NONE and not self.ignore_snap_self) or (not self.drag_source and self.ignore_snap_self):
+ self.delete_on_fail = False
+ self.set_cursor_visible(False)
+ a = self.article
+ loc_iter = self.get_mouse_iter(x, y)
+
+ if self.snapto == SNAP_SENTENCE:
+ a.mark(a.getBestSentence(loc_iter).getStart())
+ #a.markSentence(loc_iter)
+ elif self.snapto == SNAP_PARAGRAPH:
+ a.mark(a.getBestParagraph(loc_iter).getStart())
+ #a.markParagraph(loc_iter)
+ elif self.snapto == SNAP_SECTION:
+ a.mark(a.getBestSection(loc_iter).getStart())
+ #a.markSection(loc_iter)
+
+ self.changed = False
+ result = self.do_drag_motion(widget, drag_context, x, y, time)
+ self.stop_emission("drag-motion")
+ return result
+ else:
+ self.set_cursor_visible(True)
+ self.drag_source = True
+
+
+ def drag_leave_event(self, widget, context, time, data):
+ if (self.snapto != SNAP_NONE and not self.ignore_snap_self) or (not self.drag_source and self.ignore_snap_self):
+ self.delete_on_fail = True
+ self.article.clearArrow()
+ self.do_drag_leave(widget, context, time)
+ self.stop_emission("drag-leave")
+ self.changed = False
+ self.set_cursor_visible(True)
+
+ def drag_data_delete_event(self, widget, context, data):
+ if (self.snapto != SNAP_NONE and not self.ignore_snap_self) or (not self.drag_source and self.ignore_snap_self):
+ a = self.article
+ a.deleteDragSelection()
+ self.stop_emission("drag-data-delete")
+ self.changed = False
+
+ def drag_data_received_event(self, widget, context, x, y, selection_data, info, time, data):
+ if (self.snapto != SNAP_NONE and not self.ignore_snap_self) or (not self.drag_source and self.ignore_snap_self):
+ a = self.article
+ insert_loc = self.get_mouse_iter(x, y)
+ data_received_type = str(selection_data.type)
+ data = cPickle.loads(str(selection_data.data))
+
+ if data_received_type == "sentence":
+ bestpoint = insert_loc
+ elif data_received_type == "paragraph":
+ bestpoint = a.getBestParagraph(insert_loc).getStart()
+ elif data_received_type == "section":
+ bestpoint = a.getBestSection(insert_loc).getStart()
+ else:
+ bestpoint = insert_loc
+
+ a.insert(data, bestpoint)
+
+ self.stop_emission("drag-data-received")
+ context.finish(True, True, time)
+ self.grab_focus()
+
+ def drag_data_get_event(self, widget, context, selection_data, info, time, data):
+ if not self.ignore_snap_self and self.snapto != SNAP_NONE:
+ a = self.article
+
+ if self.snapto == SNAP_SENTENCE:
+ atom = gtk.gdk.atom_intern("sentence")
+ elif self.snapto == SNAP_PARAGRAPH:
+ atom = gtk.gdk.atom_intern("paragraph")
+ elif self.snapto == SNAP_SECTION:
+ atom = gtk.gdk.atom_intern("section")
+
+ string = cPickle.dumps(a.getSelection())
+ selection_data.set(atom, 8, string)
+ self.stop_emission("drag-data-get")
+
+ def drag_end_event(self, widget, context, data):
+ self.drag_source = False
diff --git a/infoslicer/widgets/Editing_View.py b/infoslicer/widgets/Editing_View.py new file mode 100644 index 0000000..5506a7f --- /dev/null +++ b/infoslicer/widgets/Editing_View.py @@ -0,0 +1,50 @@ +# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+from Editable_Textbox import Editable_Textbox
+
+class Editing_View( gtk.VBox ):
+ """
+ Created by Jonathan Mace
+ This class wraps an editable textbox into a scrollable window and
+ gives it a title.
+ """
+ def __init__(self):
+ gtk.VBox.__init__(self)
+ self.set_border_width(0)
+ self.set_spacing(2)
+
+ labeleb = gtk.EventBox()
+ labeleb.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("#EEEEEE"))
+ self.pack_start(labeleb, False, False, 0)
+ labeleb.show()
+
+ self.articletitle = gtk.Label()
+ self.articletitle.set_justify(gtk.JUSTIFY_CENTER)
+ labeleb.add(self.articletitle)
+ self.articletitle.show()
+
+ self.textwindow = gtk.ScrolledWindow()
+ self.textwindow.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
+ self.pack_start(self.textwindow)
+ self.textwindow.show()
+
+ self.textbox = Editable_Textbox()
+ self.textwindow.add(self.textbox)
+ self.textbox.show()
+
+ def set_sentence_selection_mode(self):
+ self.textbox.set_mode(0)
+
+ def set_paragraph_selection_mode(self):
+ self.textbox.set_mode(1)
+
+ def set_section_selection_mode(self):
+ self.textbox.set_mode(2)
+
+ def set_full_edit_mode(self):
+ self.textbox.set_mode(3)
+
+ def clear_contents(self):
+ self.textbox.clear()
diff --git a/infoslicer/widgets/Format_Pane.py b/infoslicer/widgets/Format_Pane.py new file mode 100644 index 0000000..ef8c2f5 --- /dev/null +++ b/infoslicer/widgets/Format_Pane.py @@ -0,0 +1,54 @@ +# Copyright (C) IBM Corporation 2008 +import pygtk +pygtk.require('2.0') +import gtk +from gettext import gettext as _ + +from Editing_View import Editing_View + +class Format_Pane(Editing_View): + """ + Created by Jonathan Mace + + See __init__.py for overview of panes. + + The Format Pane shows only the current edit article. + Users can apply formatting such as bold, underline etc. + Formatting has currently not been implemented. Dummy buttons are on the toolbar. + """ + + def __init__(self): + Editing_View.__init__(self) + self.toolitems = [] + + """ + self.combocontainer = gtk.ToolItem() + self.combocontainer.add(self.combobox) + self.toolbar.insert(self.combocontainer, -1) + self.combocontainer.show() + + self.boldbutton = gtk.ToolButton(gtk.STOCK_BOLD) + self.boldbutton.set_expand(False) + self.toolbar.insert(self.boldbutton, -1) + self.boldbutton.show() + + self.italicbutton = gtk.ToolButton(gtk.STOCK_ITALIC) + self.italicbutton.set_expand(False) + self.toolbar.insert(self.italicbutton, -1) + self.italicbutton.show() + + self.underlinebutton = gtk.ToolButton(gtk.STOCK_UNDERLINE) + self.underlinebutton.set_expand(False) + self.toolbar.insert(self.underlinebutton, -1) + self.underlinebutton.show() + """ + + def set_source_article(self, article): + self.source = article + + def set_working_article(self, article): + self.articletitle.set_markup( + "<span size='medium'><b> %s </b> %s</span>" % \ + (_("Article:"), article.article_title)) + if self.textbox.get_article() != article: + self.textbox.set_article(article) diff --git a/infoslicer/widgets/Gallery_View.py b/infoslicer/widgets/Gallery_View.py new file mode 100644 index 0000000..4464088 --- /dev/null +++ b/infoslicer/widgets/Gallery_View.py @@ -0,0 +1,177 @@ +# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import os
+import cPickle
+import logging
+
+from Editable_Textbox import Editable_Textbox
+from infoslicer.processing.Article_Data import *
+from infoslicer.processing.Article import Article
+import book
+
+logger = logging.getLogger('infoslicer')
+
+class Gallery_View( gtk.HBox ):
+ """
+ Created by Christopher Leonard
+ Drag-and-drop methods added by Jonathan Mace
+
+ The gallery view acts in the same way as the Reading_View, except that
+ instead of displaying the text of an article, it displays the images
+ associated with that article in a scrollable display.
+
+ Drag-and-drop methods have been added to set up the images as a drag
+ source. The data returned by drag-data-get will be a list containing
+ an Image_Data object and a Sentence_Data object, corresponding to the
+ image and its caption respectively.
+ """
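The wrap-around stepping used by get_next_item and get_prev_item below reduces to this index arithmetic (a standalone sketch of the same logic, with invented helper names):

```python
def next_index(index, length):
    # step forward, wrapping past the last image to the first
    index += 1
    if index == length:
        index = 0
    return index

def prev_index(index, length):
    # step backward, wrapping past the first image to the last
    if index == 0:
        index = length
    return index - 1

i = 0
i = next_index(i, 3)
i = next_index(i, 3)
i = next_index(i, 3)
assert i == 0                  # three forward steps in a 3-image gallery wrap around
assert prev_index(0, 3) == 2   # stepping back from the first image wraps too
```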
+
+ def __init__(self):
+ self.image_list = []
+ gtk.HBox.__init__(self)
+
+ self.current_index = -1
+
+ left_button = gtk.Button(label="\n\n << \n\n")
+
+ right_button = gtk.Button(label="\n\n >> \n\n")
+
+ self.imagenumberlabel = gtk.Label()
+
+ self.image = gtk.Image()
+
+ self.imagebox = gtk.EventBox()
+ self.imagebox.add(self.image)
+
+ self.imagebox.drag_source_set(gtk.gdk.BUTTON1_MASK, [("text/plain", gtk.TARGET_SAME_APP, 80)], gtk.gdk.ACTION_COPY)
+ self.imagebox.connect("drag-begin", self.drag_begin_event, None)
+ self.imagebox.connect("drag-data-get", self.drag_data_get_event, None)
+
+ self.caption = gtk.Label("")
+ self.caption.set_line_wrap(True)
+
+ self.image_drag_container = gtk.VBox()
+ self.image_drag_container.pack_start(self.imagenumberlabel, expand = False)
+ self.image_drag_container.pack_start(self.imagebox, expand=False)
+ self.image_drag_container.pack_start(self.caption, expand=False)
+
+ image_container = gtk.VBox()
+ image_container.pack_start(gtk.Label(" "))
+ image_container.pack_start(self.image_drag_container, expand=False)
+ image_container.pack_start(gtk.Label(" "))
+
+ left_button_container = gtk.VBox()
+ left_button_container.pack_start(gtk.Label(" "))
+ left_button_container.pack_start(left_button, expand=False)
+ left_button_container.pack_start(gtk.Label(" "))
+
+ right_button_container = gtk.VBox()
+ right_button_container.pack_start(gtk.Label(" "))
+ right_button_container.pack_start(right_button, expand=False)
+ right_button_container.pack_start(gtk.Label(" "))
+
+
+ self.pack_start(left_button_container, expand=False)
+ self.pack_start(image_container)
+ self.pack_start(right_button_container, expand=False)
+
+ self._source_article = None
+ self.show_all()
+ right_button.connect("clicked", self.get_next_item, None)
+ left_button.connect("clicked", self.get_prev_item, None)
+ self.get_next_item(right_button, None)
+
+ self.source_article_id = 0
+
+ def get_next_item(self, button, param):
+ if self.image_list == []:
+ if self._source_article and self._source_article.article_title:
+ self.caption.set_text("This article does not have any images")
+ else:
+ self.caption.set_text("Please select a Wikipedia article from the menu above")
+ self.image.clear()
+ return
+ self.current_index += 1
+ if self.current_index == len(self.image_list):
+ self.current_index = 0
+ self.imagebuf = gtk.gdk.pixbuf_new_from_file(self.image_list[self.current_index][0])
+ self.image.set_from_pixbuf(self.imagebuf)
+ self.caption.set_text("\n" + self.image_list[self.current_index][1])
+ self.imagenumberlabel.set_text("(%d / %d)\n" % (self.current_index+1, len(self.image_list)))
+
+ def get_prev_item(self, button, param):
+ if self.image_list == []:
+ if self._source_article and self._source_article.article_title:
+ self.caption.set_text("This article does not have any images")
+ else:
+ self.caption.set_text("Please select a Wikipedia article from the menu above")
+ self.image.clear()
+ return
+ if self.current_index == 0:
+ self.current_index = len(self.image_list)
+ self.current_index -= 1
+ self.imagebuf = gtk.gdk.pixbuf_new_from_file(self.image_list[self.current_index][0])
+ self.image.set_from_pixbuf(self.imagebuf)
+ self.caption.set_text("\n" + self.image_list[self.current_index][1])
+ self.imagenumberlabel.set_text("(%d / %d)\n" % (self.current_index+1, len(self.image_list)))
+
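The get_next_item and get_prev_item handlers above implement a circular index over image_list. The wraparound arithmetic in isolation (plain Python, no GTK; the `Cycler` name is invented for illustration):

```python
class Cycler:
    """Minimal stand-in for the gallery's next/prev index logic."""

    def __init__(self, items):
        self.items = items
        self.index = -1  # matches Gallery_View.current_index before first use

    def next(self):
        # advance and wrap back to the start, as get_next_item does
        self.index += 1
        if self.index == len(self.items):
            self.index = 0
        return self.items[self.index]

    def prev(self):
        # wrap to one past the end first, as get_prev_item does
        if self.index == 0:
            self.index = len(self.items)
        self.index -= 1
        return self.items[self.index]
```

Starting from -1, next() yields the items in order and wraps after the last one; prev() from the first item wraps to the last.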
+ def get_first_item(self):
+ if self.image_list == []:
+ if self._source_article and self._source_article.article_title:
+ self.caption.set_text("This article does not have any images")
+ else:
+ self.caption.set_text("Please select a Wikipedia article from the menu above")
+ self.image.clear()
+ return
+ self.current_index = 0
+ self.imagebuf = gtk.gdk.pixbuf_new_from_file(self.image_list[self.current_index][0])
+ self.image.set_from_pixbuf(self.imagebuf)
+ self.caption.set_text("\n" + self.image_list[self.current_index][1])
+ logger.debug("setting text to:")
+ logger.debug("(%d / %d)\n" %
+ (self.current_index+1, len(self.image_list)))
+ self.imagenumberlabel.set_text("(%d / %d)\n" % (self.current_index+1, len(self.image_list)))
+
+ def set_image_list(self, image_list):
+ logger.debug("validagting image list")
+ self.image_list = _validate_image_list(book.wiki.root, image_list)
+ logger.debug(self.image_list)
+
+ def drag_begin_event(self, widget, context, data):
+ self.imagebox.drag_source_set_icon_pixbuf(self.imagebuf)
+
+ def drag_data_get_event(self, widget, context, selection_data, info, timestamp, data):
+ logger.debug("getting data")
+ atom = gtk.gdk.atom_intern("section")
+ imagedata = Picture_Data(self.source_article_id,
+ self.image_list[self.current_index][0],
+ self.image_list[self.current_index][2])
+ captiondata = Sentence_Data(0, self.source_article_id, 0, 0, 0, self.image_list[self.current_index][1])
+ paragraph1data = Paragraph_Data(0, self.source_article_id, 0, 0, [imagedata])
+ paragraph2data = Paragraph_Data(0, self.source_article_id, 0, 0, [captiondata])
+ sectionsdata = [Section_Data(0, self.source_article_id, 0, [paragraph1data, paragraph2data])]
+ string = cPickle.dumps(sectionsdata)
+ selection_data.set(atom, 8, string)
+
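drag_data_get_event above ships the selected image and its caption as a pickled list of section data. A rough illustration of that round trip, using plain tuples as stand-ins for the Picture_Data/Sentence_Data/Section_Data classes (the real code pickles those objects with cPickle):

```python
import pickle  # the PyGTK-era code uses cPickle; pickle behaves the same here

# hypothetical stand-ins for Picture_Data, Sentence_Data and Section_Data
imagedata = ("picture", "images/foo.jpg")
captiondata = ("sentence", "A caption for foo")
sectionsdata = [("section", [imagedata, captiondata])]

# source side: serialise the structure for the selection_data payload
payload = pickle.dumps(sectionsdata)

# drop side: recover the structures from the payload bytes
received = pickle.loads(payload)
```

The drop target unpickles the byte string and rebuilds the dropped section from the recovered data objects.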
+def _validate_image_list(root, image_list):
+ """
+ provides a mechanism for validating image lists and expanding relative paths
+ @param image_list: list of images to validate
+ @return: list of images with corrected paths, and broken images removed
+ """
+ for i in xrange(len(image_list)):
+ if not os.access(image_list[i][0], os.F_OK):
+ if os.access(os.path.join(root, image_list[i][0]), os.F_OK):
+ image_list[i] = (os.path.join(root, image_list[i][0]),
+ image_list[i][1], image_list[i][2])
+ else:
+                image_list[i] = None
+ #removing during for loop was unreliable
+ while None in image_list:
+ image_list.remove(None)
+ return image_list
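The loop above marks broken entries with None and strips them afterwards. An equivalent, GTK-free sketch of the same validation rule, written to build a new list instead of mutating in place (file names here are invented):

```python
import os
import tempfile

def validate_image_list(root, image_list):
    """Expand relative paths against root; drop entries whose file is missing."""
    result = []
    for path, caption, ident in image_list:
        if os.access(path, os.F_OK):
            result.append((path, caption, ident))
        elif os.access(os.path.join(root, path), os.F_OK):
            # the path was relative to the book's root directory
            result.append((os.path.join(root, path), caption, ident))
        # otherwise the image is broken and is silently dropped
    return result
```

Building a fresh list sidesteps the "removing during for loop was unreliable" problem the original comments on.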
diff --git a/infoslicer/widgets/Image_Pane.py b/infoslicer/widgets/Image_Pane.py
new file mode 100644
index 0000000..99026f0
--- /dev/null
+++ b/infoslicer/widgets/Image_Pane.py
@@ -0,0 +1,90 @@
+# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import logging
+from gettext import gettext as _
+
+from Editing_View import Editing_View
+from Gallery_View import Gallery_View
+from infoslicer.processing.Article import Article
+
+logger = logging.getLogger('infoslicer')
+
+class Image_Pane(gtk.HBox):
+    """
+    Created by Christopher Leonard
+
+    See __init__.py for overview of panes.
+
+    The Image Pane gives a side-by-side view of the source article and edit article.
+    The left hand side shows images in the source article.  These can be dragged into
+    the edit article.
+    """
+
+    def __init__(self):
+        gtk.HBox.__init__(self)
+        self.toolitems = []
+
+        gallery_box = gtk.VBox()
+        gallery_box.show()
+
+        labeleb = gtk.EventBox()
+        labeleb.modify_bg(gtk.STATE_NORMAL, gtk.gdk.color_parse("#EEEEEE"))
+        gallery_box.pack_start(labeleb, False, False, 0)
+        labeleb.show()
+
+        self.articletitle = gtk.Label()
+        self.articletitle.set_justify(gtk.JUSTIFY_CENTER)
+        labeleb.add(self.articletitle)
+        self.articletitle.show()
+
+        self.gallery = Gallery_View()
+        self.gallery.set_size_request(gtk.gdk.screen_width()/2, -1)
+        gallery_box.pack_start(self.gallery)
+
+        self.pack_start(gallery_box, False)
+        self.editarticle = Editing_View()
+        self.pack_start(self.editarticle)
+        self.editarticle.show_all()
+
+        self.gallery._source_article = None
+
+    def set_source_article(self, source):
+        self.articletitle.set_markup(
+                "<span size='medium'><b> %s </b> %s</span>"% \
+                (_("Article:"), source.article_title))
+
+        if self.gallery._source_article == source:
+            return
+
+        logger.debug("source received.  title: %s" % source.article_title)
+        current = self.gallery._source_article
+        self.gallery._source_article = source
+
+        if source and source.article_title:
+            self.gallery.current_index = 0
+            if source.image_list != []:
+                logger.debug("setting images")
+                self.gallery.set_image_list(source.image_list)
+                self.gallery.get_first_item()
+
+                self.gallery.source_article_id = source.source_article_id
+                logger.debug(source.image_list)
+            else:
+                self.gallery.imagenumberlabel.set_label("")
+                self.gallery.image.clear()
+                self.gallery.caption.set_text(_("This article does not have any images"))
+        else:
+            self.gallery.imagenumberlabel.set_label("")
+            self.gallery.caption.set_text(_("Please select a Wikipedia article from the menu above"))
+
+    def set_working_article(self, article):
+        logger.debug("working received, title %s" % article.article_title)
+
+        self.editarticle.articletitle.set_markup(
+                "<span size='medium'><b> %s </b> %s</span>"% \
+                (_("Article:"), article.article_title))
+
+        if self.editarticle.textbox.get_article() != article:
+            self.editarticle.textbox.set_article(article)
diff --git a/infoslicer/widgets/Reading_View.py b/infoslicer/widgets/Reading_View.py
new file mode 100644
index 0000000..55609c9
--- /dev/null
+++ b/infoslicer/widgets/Reading_View.py
@@ -0,0 +1,49 @@
+# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+from Readonly_Textbox import Readonly_Textbox
+import logging
+
+logger = logging.getLogger('infoslicer')
+elogger = logging.getLogger('infoslicer::except')
+
+class Reading_View( gtk.VBox ):
+ """
+ Created by Jonathan Mace
+
+    This class wraps a Readonly_Textbox in a scrolled window.  The article
+    to display is selected externally (the original design used a combobox
+    populated with article names); when a new article is chosen, the
+    readonly textbox is set to display it.  Convenience methods switch the
+    textbox between sentence, paragraph, section and full-edit modes.
+ """
+
+ def __init__(self):
+ gtk.VBox.__init__(self)
+
+ self.articlewindow = gtk.ScrolledWindow()
+ self.articlewindow.set_policy(gtk.POLICY_NEVER, gtk.POLICY_AUTOMATIC)
+ self.pack_start(self.articlewindow)
+ self.articlewindow.show()
+
+ self.textbox = Readonly_Textbox()
+ self.articlewindow.add(self.textbox)
+ self.textbox.show()
+
+ def set_sentence_selection_mode(self):
+ self.textbox.set_mode(0)
+
+ def set_paragraph_selection_mode(self):
+ self.textbox.set_mode(1)
+
+ def set_section_selection_mode(self):
+ self.textbox.set_mode(2)
+
+ def set_full_edit_mode(self):
+ self.textbox.set_mode(3)
+
+ def clear_contents(self):
+ self.textbox.clear()
+
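The integers passed to set_mode by the helpers above are the selection-mode constants defined in Readonly_Textbox.py (and Textbox.py); mirroring that one-line definition shows which number selects which mode:

```python
# mirror of the constants shared by Textbox.py and Readonly_Textbox.py;
# Reading_View's set_*_mode helpers pass these values to set_mode()
SELECT_SENTENCE, SELECT_PARAGRAPH, SELECT_SECTION, FULL_EDIT = range(4)
```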
diff --git a/infoslicer/widgets/Readonly_Textbox.py b/infoslicer/widgets/Readonly_Textbox.py
new file mode 100644
index 0000000..958cfcd
--- /dev/null
+++ b/infoslicer/widgets/Readonly_Textbox.py
@@ -0,0 +1,169 @@
+# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import pango
+import cPickle
+from Textbox import Textbox
+
+SELECT_SENTENCE, SELECT_PARAGRAPH, SELECT_SECTION, FULL_EDIT = range(4)
+
+class Readonly_Textbox( Textbox ):
+ """
+ Created by Jonathan Mace
+ This class implements its own special code for dragging and selecting.
+ It has an article class which provides the text buffer, and any modifications
+ to the text buffer are done via the article class.
+ This class is read-only, so it is not editable and will not act as a drag
+ destination.
+ """
+
+ def __init__(self, use_as_drag_source = True):
+ Textbox.__init__(self)
+ self.selecting = False
+ self.use_as_drag_source = use_as_drag_source
+ self.set_mode(SELECT_SENTENCE)
+ self.block = True
+ self.modify_font(pango.FontDescription('arial 9'))
+
+
+ def set_mode(self, mode):
+ self.selectionmode = mode
+ self.disconnect_handlers()
+ if mode == SELECT_SENTENCE: self.__set_select_mode()
+ elif mode == SELECT_PARAGRAPH: self.__set_select_mode()
+ elif mode == SELECT_SECTION: self.__set_select_mode()
+ else: pass
+
+ def __set_select_mode(self):
+ if self.use_as_drag_source == True:
+ self.event_handlers.append(self.connect("button-press-event", self.clicked_event, None))
+ self.event_handlers.append(self.connect("motion-notify-event", self.motion_notify, None))
+ self.event_handlers.append(self.connect("move-cursor", self.move_cursor, None))
+ self.event_handlers.append(self.connect("button-release-event", self.unclicked_event, None))
+ self.event_handlers.append(self.connect("drag_data_get", self.drag_data_get_event, None))
+ self.event_handlers.append(self.connect("drag-motion", self.drag_motion, None))
+
+ def drag_motion(self, widget, context, x, y, timestamp, data):
+ context.drag_status(gtk.gdk.ACTION_COPY, timestamp)
+ return True
+
+ def clicked_event(self, widget, event, data):
+ if event.type == gtk.gdk._2BUTTON_PRESS or event.type == gtk.gdk._3BUTTON_PRESS:
+ self.stop_emission("button_press_event")
+ return
+ if event.button == 3:
+ self.stop_emission("button_press_event")
+ return
+ if not self.get_buffer().get_has_selection():
+ result = self.do_button_press_event(widget, event)
+
+ a = self.article
+ loc_iter = self.get_mouse_iter(int(event.x), int(event.y))
+
+ self.selecting = True
+ self.selectionstart = loc_iter
+ self.stop_emission("button-press-event")
+ return result
+ else:
+ buf = self.get_buffer()
+ bounds = buf.get_selection_bounds()
+ if bounds == ():
+ return
+ start = bounds[0]
+ end = bounds[1]
+ if start.compare(end) == 1:
+ start, end = end, start
+ loc = self.get_mouse_iter(int(event.x), int(event.y))
+ if start.compare(loc) == 1 or loc.compare(end) == 1:
+ self.do_button_press_event(widget, event)
+ a = self.article
+ self.selecting = True
+ self.selectionstart = loc
+ self.stop_emission("button-press-event")
+
+ def move_cursor(self, widget, stepsize, count, extend, data):
+ if self.selecting:
+            result = self.do_move_cursor(widget, stepsize, count, extend)
+ self.stop_emission("move-cursor")
+ return result
+
+ def motion_notify(self, widget, event, data):
+ if self.selecting:
+ """ The following code implements the 'snapping' to sentences etc.
+
+            The base class responds to motion notify events with a default
+            action which must run to completion, otherwise on some platforms
+            no further motion notify events are delivered.
+
+            So the first 'run through' of our motion notify handler emits a
+            second motion notify event, ignores it, and lets the base class
+            respond to it.  When control returns to the first emission, we run
+            our own code.  The order of events is:
+
+ 1) motion notify event 1 emitted naturally
+ 2) our class responds to motion notify event 1
+ 3) motion notify event 2 emitted by step 2)
+ 4) our class ignores motion notify event 2
+ 5) the default class acts upon motion notify event 2
+ 6) motion notify event 2 finishes naturally
+ 7) our class does its stuff
+ 8) motion notify event 1 finishes by our class stopping its emission
+
+ """
+
+ if self.block == True:
+ self.stop_emission("motion-notify-event")
+ self.block = False
+ self.emit("motion-notify-event", event)
+
+ buf = self.get_buffer()
+ mouseiter = self.get_mouse_iter(int(event.x), int(event.y))
+ article = self.get_article()
+ if mouseiter.compare(self.selectionstart) == 1:
+ if self.selectionmode == SELECT_SENTENCE:
+ selectionstart = article.getSentence(self.selectionstart).getStart()
+ selectionend = article.getSentence(mouseiter).getEnd()
+ if self.selectionmode == SELECT_PARAGRAPH:
+ selectionstart = article.getParagraph(self.selectionstart).getStart()
+ selectionend = article.getParagraph(mouseiter).getEnd()
+ if self.selectionmode == SELECT_SECTION:
+ selectionstart = article.getSection(self.selectionstart).getStart()
+ selectionend = article.getSection(mouseiter).getEnd()
+ else:
+ if self.selectionmode == SELECT_SENTENCE:
+ selectionstart = article.getSentence(mouseiter).getStart()
+ selectionend = article.getSentence(self.selectionstart).getEnd()
+ if self.selectionmode == SELECT_PARAGRAPH:
+ selectionstart = article.getParagraph(mouseiter).getStart()
+ selectionend = article.getParagraph(self.selectionstart).getEnd()
+ if self.selectionmode == SELECT_SECTION:
+ selectionstart = article.getSection(mouseiter).getStart()
+ selectionend = article.getSection(self.selectionstart).getEnd()
+ self.scroll_to_iter(mouseiter, 0)
+ article.highlight(selectionstart, selectionend)
+
+ else:
+ self.block = True
+
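The block/re-emit trick described in the comment above reduces to a small state machine: handle each natural event once, re-emit a copy for the base class, and ignore that copy ourselves. A GTK-free sketch of the pattern (class and parameter names are invented for illustration):

```python
class ReemitGuard:
    """Mimics Readonly_Textbox.block: each natural event is handled once,
    and the synthetic copy we re-emit is passed straight to the default."""

    def __init__(self, emit, base_handler, our_handler):
        self.block = True
        self.emit = emit              # re-dispatches an event to this handler
        self.base_handler = base_handler
        self.our_handler = our_handler

    def on_event(self, event):
        if self.block:
            self.block = False
            self.emit(event)          # steps 3-6: the copy, handled by the base
            self.our_handler(event)   # step 7: our snapping logic runs last
        else:
            self.block = True         # step 4: ignore our own copy...
            self.base_handler(event)  # step 5: ...but let the default act on it
```

After one natural event the default handler has run exactly once, our handler has run exactly once afterwards, and the guard is re-armed for the next event.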
+ def unclicked_event(self, widget, event, data):
+ self.article.clearArrow()
+ self.do_button_release_event(widget, event)
+ self.selecting = False
+ self.stop_emission("button-release-event")
+
+ def drag_data_get_event(self, widget, context, selection_data, info, time, data):
+
+ a = self.article
+
+ if self.selectionmode == SELECT_SENTENCE:
+ atom = gtk.gdk.atom_intern("sentence")
+ if self.selectionmode == SELECT_PARAGRAPH:
+ atom = gtk.gdk.atom_intern("paragraph")
+ if self.selectionmode == SELECT_SECTION:
+ atom = gtk.gdk.atom_intern("section")
+
+ string = cPickle.dumps(a.getSelection())
+ selection_data.set(atom, 8, string)
+ self.stop_emission("drag-data-get")
+ self.set_editable(False)
+
diff --git a/infoslicer/widgets/Textbox.py b/infoslicer/widgets/Textbox.py
new file mode 100644
index 0000000..95f0681
--- /dev/null
+++ b/infoslicer/widgets/Textbox.py
@@ -0,0 +1,56 @@
+# Copyright (C) IBM Corporation 2008
+import pygtk
+pygtk.require('2.0')
+import gtk
+import cPickle
+import pango
+
+SELECT_SENTENCE, SELECT_PARAGRAPH, SELECT_SECTION, FULL_EDIT = range(4)
+
+class Textbox( gtk.TextView ):
+ """
+ Created by Jonathan Mace
+ The Textbox class is the base class for our own custom textboxes which implement
+ the snapping to sentences/paragraphs/sections. The two subclasses are:
+ Editable_Textbox - this is a textbox with full editing features
+ Readonly_Textbox - this textbox is not editable and will not respond to
+ drags.
+ """
+
+
+ def __init__(self):
+ gtk.TextView.__init__(self)
+ self.set_border_width(1)
+ self.event_handlers = []
+ self.set_wrap_mode(gtk.WRAP_WORD)
+ self.set_cursor_visible(False)
+ self.set_editable(False)
+ self.modify_font(pango.FontDescription('arial 9'))
+ self.article = None
+ self.set_property("left-margin", 5)
+
+ def set_article(self, article):
+ self.article = article
+ self.set_buffer(article.getBuffer())
+
+ def get_article(self):
+ return self.article
+
+ def show(self):
+ gtk.TextView.show(self)
+
+ def clear(self):
+ self.article.delete()
+
+ def disconnect_handlers(self):
+ self.set_editable(False)
+ self.set_cursor_visible(False)
+ for handler in self.event_handlers:
+ self.disconnect(handler)
+ self.event_handlers = []
+
+ def get_mouse_iter(self, x, y):
+ # Convenience method to get the iter in the buffer of x, y coords.
+ click_coords = self.window_to_buffer_coords(gtk.TEXT_WINDOW_TEXT, x, y)
+ mouseClickPositionIter = self.get_iter_at_location(click_coords[0], click_coords[1])
+ return mouseClickPositionIter
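disconnect_handlers above relies on each subclass recording the id returned by every connect() call in event_handlers, so all of them can be detached when the selection mode changes. A toy dispatcher (not GTK; names invented) showing that bookkeeping:

```python
class HandlerRegistry:
    """Toy signal dispatcher illustrating the connect/disconnect-id
    bookkeeping that gtk.Widget provides for Textbox.event_handlers."""

    def __init__(self):
        self._handlers = {}
        self._next_id = 0

    def connect(self, signal, callback):
        # return an id the caller must keep to disconnect later
        handler_id = self._next_id
        self._next_id += 1
        self._handlers[handler_id] = (signal, callback)
        return handler_id

    def disconnect(self, handler_id):
        self._handlers.pop(handler_id, None)

    def emit(self, signal, *args):
        for sig, cb in list(self._handlers.values()):
            if sig == signal:
                cb(*args)
```

As in Textbox, the caller collects the ids in a list and loops over it to disconnect everything at once.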
\ No newline at end of file
diff --git a/infoslicer/widgets/__init__.py b/infoslicer/widgets/__init__.py
new file mode 100644
index 0000000..533d012
--- /dev/null
+++ b/infoslicer/widgets/__init__.py
@@ -0,0 +1,21 @@
+# Copyright (C) IBM Corporation 2008
+"""
+Every class of type *_Pane has the following.
+Thank python for not having interfaces.
+
+pane.panel
+pane.toolbar
+
+These correspond to the main view and toolbar associated with this pane.
+
+set_source_article
+get_source_article
+set_working_article
+get_working_article
+
+The GUI passes the currently selected source and working articles between panes
+when panes are switched.  The pane will always be given an article using
+set_source_article before the get_source_article method is called.  Thus it is
+feasible to just save the article argument and return it in the get method.
+"""
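Since the pane contract described in __init__.py is purely duck-typed, a conforming pane only needs the listed attributes and methods. A minimal hypothetical implementation that just stores and returns its articles, which the docstring notes is sufficient:

```python
class Minimal_Pane(object):
    """Hypothetical pane obeying the informal *_Pane contract."""

    def __init__(self):
        self.panel = None    # the main view widget in a real pane
        self.toolbar = None  # the toolbar widget in a real pane
        self._source = None
        self._working = None

    def set_source_article(self, article):
        self._source = article

    def get_source_article(self):
        return self._source

    def set_working_article(self, article):
        self._working = article

    def get_working_article(self):
        return self._working
```

The GUI can then hand articles between panes on switch without knowing anything about a pane beyond these names.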