projects:hack_ucsc_2015 · last revised 2015-01-22 13:05 by jbergamini@jeff.cis.cabrillo.edu
====== cccPlan ======
  
{{:projects:cccplan.png?direct|cccPlan screenshot}}

cccPlan aggregates information from [[http://assist.org|assist.org]] and individual community colleges to present a broader and more informative list of transfer-level courses between colleges.
  
Team members: Chad Benson, Jade Keller, William Ritson, Pei Jie Sim, Pei Yong Sim
  
{{:projects:cccplanteam.jpg?direct&400|cccPlan team members: William Ritson, Pei Yong Sim, Chad Benson, Jade Keller, Pei Jie Sim}}
  
===== GitHub repositories and other sites =====
{{:projects:clearskin.png?direct&400|Clear Skin screenshot}}
  
We are building a database of skincare products that allows the user to search for products by ingredient. We used Kimono Labs' Chrome extension to build custom APIs that scrape websites for ingredient lists, SQLite for the database itself, PHP on the server, and AJAX on the client.
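As a rough illustration of the ingredient search, here is a minimal SQLite sketch in Python. The schema, table names, and sample rows are our own assumptions for the example; the team's actual implementation runs PHP on the server and may be structured quite differently.

```python
import sqlite3

# Hypothetical schema: a products table plus a one-row-per-ingredient table,
# which is one simple way to make "search by ingredient" a single JOIN.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE ingredients (product_id INTEGER, ingredient TEXT);
""")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1, "Gentle Cleanser"), (2, "Night Cream")])
conn.executemany("INSERT INTO ingredients VALUES (?, ?)",
                 [(1, "glycerin"), (1, "water"),
                  (2, "retinol"), (2, "glycerin")])

def products_with_ingredient(term):
    """Return names of products whose ingredient list matches the search term."""
    rows = conn.execute(
        "SELECT DISTINCT p.name FROM products p "
        "JOIN ingredients i ON i.product_id = p.id "
        "WHERE i.ingredient LIKE ?",
        (f"%{term}%",))
    return [name for (name,) in rows]

print(products_with_ingredient("retinol"))
```

The parameterized `LIKE ?` query is the part the AJAX front end would call via the PHP layer; scraped ingredient lists from the Kimono-built APIs would populate the `ingredients` table.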
  
Team members: Christopher Chen, Francisco Piva, Nikolas Payne, Bruno Hernandez
  
{{:projects:clearskinteam.jpg?direct&400|Clear Skin team members: Christopher Chen, Bruno Hernandez, Francisco Piva, Nikolas Payne}}
  
===== GitHub repositories and other sites =====
  
  * [[https://github.com/embasa/SkinCare|GitHub repository]]
  * [[http://skin.chrisdesigns.net/|Online hosted version]]
  
====== What's Missing ======
{{:projects:whatsmissing.png?direct&400|What's Missing screenshot}}
  
We made a media-analysis tool that takes a primary source and an article and shows you exactly how much of the primary source the article's author failed to include.
  
It finds the greatest common subsequences between the texts by building a hash table of the locations of every word in the primary source. It uses this table to guess where possible subsequences might start, then iterates forward from each candidate and scores how much the sequences have in common, taking into account how long the words are and how different they are (we used the Levenshtein distance to rate each word pair). It has a few warts and looks like something we came up with in a weekend, but it runs fast and works exactly as we intended. We even caught some biased reporting - look at our screenshot!
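The matching scheme described above can be sketched roughly as follows. This is an illustrative approximation, not the team's actual code: the `word_score` weighting (how word length and Levenshtein distance combine) and the minimum match length are our own assumptions.

```python
from collections import defaultdict

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def word_score(a, b):
    """Assumed weighting: longer words that differ little score higher."""
    return max(len(a), len(b)) - 2 * levenshtein(a, b)

def common_subsequences(source_words, article_words, min_len=3):
    """Find runs where the article closely tracks the primary source."""
    # Hash table: word -> every position where it occurs in the source.
    positions = defaultdict(list)
    for i, w in enumerate(source_words):
        positions[w].append(i)
    matches = []
    for j, w in enumerate(article_words):
        for i in positions.get(w, []):
            # Iterate forward from each candidate start while words stay similar.
            length = 0
            while (i + length < len(source_words)
                   and j + length < len(article_words)
                   and word_score(source_words[i + length],
                                  article_words[j + length]) > 0):
                length += 1
            if length >= min_len:
                matches.append((i, j, length))  # (source pos, article pos, run)
    return matches

source = "the quick brown fox jumps over the lazy dog".split()
article = "quick brown fox ran".split()
print(common_subsequences(source, article))
```

Anything in the source not covered by a returned match is what the article "failed to include"; subtracting the matched spans from the source word list yields the omitted portion.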
  
Team members: Bradley Lacombe, Will Mosher, Julya Wacha
  
{{:projects:whatsmissingteam.jpg?direct&400|What's Missing team members: Julya Wacha, Will Mosher, Bradley Lacombe}}
  
===== GitHub repositories and other sites =====
  
  * [[https://github.com/WillMosher/hack-ucsc-project/|GitHub repository]]
 
projects/hack_ucsc_2015.txt · Last modified: 2022-11-14 23:30 by 127.0.0.1