Gamecock Computing Research Symposium

Friday, September 26, 2014 - 02:00 pm
Swearingen Engineering Center

Come and see the research of our faculty and graduate students. Poster session is at 2:00pm in the Swearingen atrium. At 3:00pm there will be Awards and Faculty Presentations in Amoco Hall.

Super-Resolution Image Reconstruction

Wednesday, September 24, 2014 - 02:00 pm
Swearingen 1A03 (Faculty Lounge)
COLLOQUIUM
Dipti Patra
Electrical Engineering Department, National Institute of Technology, Rourkela

Abstract: Image reconstruction is a mathematical process that retrieves information lost or obscured during imaging. Degradation arises mainly from optical distortion, motion blur due to limited shutter speed, environmental noise, and aliasing effects. In contrast to image enhancement, where the appearance of an image is improved to suit human subjective preferences, image reconstruction is an objective approach that recovers the image from mathematical and statistical models of the degradation. Enhancing the resolution of the reconstructed image is a key requirement for improving both pictorial information for human interpretation and representations for automatic machine perception. High resolution refers to high pixel density, and images with high pixel density offer critical information in many practical applications. Super-resolution (SR) reconstruction is a software-level solution for enhancing the spatial resolution of the reconstructed image. The term "super" signifies that the technique can overcome the inherent resolution limitation of low-resolution (LR) imaging systems. It works by fusing the non-redundant information contained in one or more low-resolution images of the same scene with sub-pixel shifts. SR reconstruction thus increases the resolution of an image beyond the limits imposed by imaging hardware and optics. The method has a wide range of applications, including surveillance video, remote sensing, medical imaging, and video standard conversion. This talk reviews the current state of research on super-resolution reconstruction and outlines some potential future directions.

Dr. Dipti Patra is an Associate Professor in the Electrical Engineering Department at the National Institute of Technology, Rourkela, India. She obtained her Ph.D. in Electrical Engineering from the same institution in 2006. Her major research areas are digital signal processing (especially image and video processing), computer vision, and stochastic processes. She is a Senior Member of the IEEE, a Fellow of the Institution of Electronics and Telecommunication Engineers, India, and a Fellow of The Institution of Engineers, India. She has published 55 research papers in refereed national and international journals and conference proceedings. She has reviewed for many international journals, including IET Image Processing (IET), Systems and Information Sciences (Elsevier), Mathematical Problems in Engineering, Arabian Journal for Science and Engineering (Springer), The Computer Journal (Oxford University Press), and Hindawi journals. She has served as a program committee member for many IEEE international conferences, e.g. Pattern Recognition & Machine Intelligence 2013. Currently (September 13-16) she is a visitor at the University of South Carolina, doing collaborative research with Prof. Yan Tong with support from an international faculty exchange program.
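The multi-frame fusion idea in the abstract -- combining sub-pixel-shifted low-resolution frames on a denser grid -- can be illustrated with a toy shift-and-add sketch. This is not the speaker's method; the function name and the assumption of known shifts that round to integer positions on the high-resolution grid are ours.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Fuse shifted low-resolution frames onto a high-resolution grid.

    lr_frames : list of 2-D arrays, all the same shape
    shifts    : per-frame (dy, dx) sub-pixel shifts, in HR-pixel units
    scale     : integer upsampling factor
    """
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Each LR sample lands at its (rounded) shifted position on the HR grid.
        ys = (np.arange(h) * scale + dy).round().astype(int) % (h * scale)
        xs = (np.arange(w) * scale + dx).round().astype(int) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1            # leave unobserved HR pixels at zero
    return acc / cnt
```

With four frames whose shifts cover every sub-pixel offset at scale 2, each high-resolution pixel is observed exactly once and the scene is recovered; real SR must additionally estimate the shifts and regularize the unobserved pixels.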

Duke Energy Information Session

Tuesday, September 16, 2014 - 06:00 pm
SWGN 2A15
ACM is hosting Jacob Young for a Duke Energy information session next Tuesday evening, September 16, at 6pm in SWGN 2A15. Duke Energy will be providing pizza or other food and drinks that night. For more information, join the ACM @ USC group.

Cyber Security Club Meeting

Thursday, September 4, 2014 - 06:00 pm
Swearingen 1C03 (Amoco Hall)

Come join us this Thursday (9/4/2014) at 6pm for the first meeting of the new Cyber Security Club. It will be held in Swearingen 1C03 (Amoco Hall). We will present the club constitution to the membership, discuss upcoming events, and end with a showing of an amusing (but technical) 30-minute presentation from DEF CON 22, entitled "Weaponizing Your Pets" by Gene Bransfield. Anyone who is interested in cyber security is welcome. We have an e-mail distribution group; to be added, please email ronni@cec.sc.edu.

Thanks,
Ronni Wilkinson
Information Technology Services
College of Engineering and Computing
University of South Carolina

Lost in the Middle Kingdom: Teaching New Languages Using Serious Games and Language Learning Methodologies

Wednesday, July 9, 2014 - 11:30 am
Swearingen (Dean’s Conference Room)
MASTER’S DEFENSE
Renaldo J. Doe

Lost in the Middle Kingdom is a serious video game for language learning. Our game utilizes several language learning methodologies, including second language acquisition theory, content-based instruction, and task-based language teaching. We analyze previous language learning games and their drawbacks in order to create a more effective experience. Lost in the Middle Kingdom seeks to balance language learning with fun and intuitive gameplay in order to deliver a form of interactive media that is accepted by both the gaming and research communities. Our test data illustrates the strengths and weaknesses of our game and how future improvements can bolster its effectiveness.

Using Genetic Algorithms to Solve the Median Problem and Phylogenetic Inference

Tuesday, July 8, 2014 - 10:00 am
Swearingen (3A75)
DISSERTATION DEFENSE
Nan Gao

Abstract: Genome rearrangement analysis has attracted a lot of attention in phylogenetic computation and comparative genomics. Solving median problems under various distance definitions has been a focus, as they provide the building blocks for maximum-parsimony analysis of phylogenies and ancestral genomes. The Median Problem (MP) has been proved NP-hard, and although several exact and heuristic algorithms are available, all of these methods have difficulty with three distant genomes separated by many evolutionary events. Current approaches such as MGR and GRAPPA are restricted to small collections of genomes and low-resolution gene-order data with a few hundred rearrangement events. In this work, we focus on heuristic algorithms that combine genome sorting algorithms with genetic algorithms (GAs) to produce new methods and directions for whole-genome median solving, ancestor inference, and phylogeny reconstruction.

For the equal-content median problem, we propose a genetic algorithm built on DCJ sorting operations, called GA-DCJ. Following the classic genetic algorithm framework, we develop a replacement for each traditional GA procedure. The final results of our GA-based algorithm are optimal median genome(s) and their median score. Within limited time and space, especially on large-scale and distant datasets, our algorithm obtains better results than GRAPPA.

Extending the ideas of the equal-content median solver, we develop another genetic-algorithm-based solver, GaDCJ-Indel, which solves the median problem for unequal genomes (without duplication). In the DCJ-Indel model, one of the key steps is still the sorting operation; the difference from the equal-genome case is that there are two sorting directions, a minimal DCJ operation path or a minimal indel operation path. By following different sorting paths at each step, we obtain various genome structures with which to fill our population pool. In addition, we adopt an adaptive surcharge-triangle inequality in place of the classic triangle inequality in our fitness function, to fit the unequal-genome restrictions and obtain more efficient results. Our experiments show that the GaDCJ-Indel method not only converges to an accurate median score but also infers ancestors that are very close to the true ancestors.

An important application of genome rearrangement analysis is inferring ancestral genomes, which is valuable for identifying patterns of evolution and for modeling evolutionary processes. However, computing ancestral genomes is very difficult, and we have to rely on heuristic methods with various limitations. We propose a GA-Tree algorithm that adopts meta-population, co-evolution, and repopulation-pool methods. We describe and illustrate this first genetic algorithm for ancestor inference step by step; it uses fitness scores designed to account for co-evolution and uses sorting-based methods to initialize and evolve populations. Our extensive experiments show that, compared with other existing tools, our method is accurate and infers ancestors much closer to the true ancestors.
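The classic genetic algorithm framework the abstract builds on (initialize a population, select, recombine, mutate, repeat) can be sketched generically. This is not GA-DCJ itself -- the DCJ sorting operators and median fitness are the dissertation's contribution -- just the GA frame, demonstrated on a toy bit-string problem; all names and parameters here are illustrative.

```python
import random

def genetic_algorithm(fitness, init, crossover, mutate,
                      pop_size=40, generations=100, seed=0):
    """Classic GA frame. `fitness` is minimized (lower = better score)."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # pick two surviving parents
            children.append(mutate(crossover(a, b, rng), rng))
        pop = elite + children                # elites survive unmutated
    return min(pop, key=fitness)

# Toy demo: drive a random bit string toward all zeros
# (a stand-in for minimizing a median score).
target_len = 20
best = genetic_algorithm(
    fitness=sum,
    init=lambda rng: [rng.randint(0, 1) for _ in range(target_len)],
    crossover=lambda a, b, rng: [rng.choice(p) for p in zip(a, b)],  # uniform
    mutate=lambda x, rng: [(g ^ 1) if rng.random() < 0.05 else g for g in x],
)
```

In GA-DCJ the same slots would be filled differently: candidates are genomes, and the initialization and variation steps come from DCJ sorting scenarios rather than bit flips.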

Document Analysis Techniques for Handwritten Text Segmentation, Document Image Rectification and Digital Collation

Thursday, July 3, 2014 - 11:00 am
Swearingen 3A75
DISSERTATION DEFENSE
Dhaval Salvi
Department of Computer Science and Engineering, University of South Carolina

Abstract: Document image analysis comprises the algorithms and techniques used to convert an image of a document into a computer-readable description. In this work we focus on three such techniques: (1) handwritten text segmentation, (2) document image rectification, and (3) digital collation.

Offline handwritten text recognition is a very challenging problem. Aside from the large variation among handwriting styles, neighboring characters within a word are usually connected, and we may need to segment a word into individual characters for accurate character recognition. Many existing methods achieve text segmentation by evaluating the local stroke geometry and imposing constraints on the size of each resulting character, such as its width, height, and aspect ratio. These constraints are well suited to printed text but may not hold for handwritten text. Other methods apply a holistic approach, using a set of lexicons to guide and correct the segmentation and recognition; this approach may fail when the lexicon domain is insufficient. In the first part of this work, we present a new global, non-holistic method for handwritten text segmentation that makes no limiting assumptions about character size or the number of characters in a word.

Digitization of document images using OCR-based systems is adversely affected if the image contains distortion (warping). Often, costly and precisely calibrated special hardware such as stereo cameras or laser scanners is used to infer a 3D model of the distorted page, which is then used to remove the distortion. Recent methods focus on creating a 3D shape model from 2D distortion information obtained from the document image, and their performance depends heavily on estimating an accurate 2D distortion grid. These methods often affix the 2D distortion grid lines to the text lines and, as such, may suffer in the presence of unreliable textual cues caused by preprocessing steps such as binarization. In printed document images, the white space between the text lines carries as much information about the 2D distortion as the text lines themselves. Based on this intuitive idea, in the second part of our work we build a 2D distortion grid from white-space lines, which can be used to rectify a printed document image with a dewarping algorithm.

Collation of texts and images is an indispensable but labor-intensive step in the study of print materials, and a methodology often used by textual scholars when the underlying manuscript of the text is nonexistent. Various methods and machines have been designed to assist in this labor, but it remains expensive and time-consuming, requiring travel to distant repositories for the painstaking visual examination of multiple original copies. Efforts to digitize collation have so far depended on first transcribing the texts to be compared, introducing a layer not only of labor and expense but also of potential error. Digital collation will instead automate the first stages of collation directly from the document images of the original texts, dramatically speeding the process of comparison. We describe such a novel framework for digital collation in the third part of this work.
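The white-space intuition in the rectification part -- that ink-free bands between text lines carry layout information -- can be illustrated with a minimal projection-profile sketch that locates horizontal white-space bands in a binarized page. This is not the defended method (which fits a full 2D distortion grid to curved white-space lines); the function name and threshold are illustrative.

```python
import numpy as np

def whitespace_rows(binary_page, min_gap=2):
    """Find inter-line white-space bands in a binarized page image.

    binary_page : 2-D array, 1 = ink, 0 = background
    Returns a list of (start_row, end_row) bands containing no ink and at
    least `min_gap` rows tall -- raw material for a white-space grid.
    """
    ink_per_row = binary_page.sum(axis=1)    # horizontal projection profile
    bands, start = [], None
    for r, ink in enumerate(ink_per_row):
        if ink == 0 and start is None:
            start = r                        # entering a white-space run
        elif ink > 0 and start is not None:
            if r - start >= min_gap:
                bands.append((start, r))     # close a sufficiently tall band
            start = None
    if start is not None and len(ink_per_row) - start >= min_gap:
        bands.append((start, len(ink_per_row)))
    return bands
```

On a warped page the bands are curves rather than straight rows, so a real system traces them locally; the flat-page version above shows only the core projection idea.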