Please use this identifier to cite or link to this item: http://archive.cmb.ac.lk:8080/xmlui/handle/70130/100
Full metadata record
dc.contributor.author: Premaratne, S.C.
dc.contributor.author: Karunarathne, D.D.
dc.contributor.author: Wikramanayake, G.N.
dc.contributor.author: Hewagamage, K.P.
dc.contributor.author: Dias, G.K.A.
dc.date.accessioned: 2011-10-04T08:41:22Z
dc.date.available: 2011-10-04T08:41:22Z
dc.date.issued: 2004
dc.identifier.uri: http://archive.cmb.ac.lk:8080/xmlui/handle/70130/100
dc.description.abstract: Use of video clips for e-learning is very limited due to high bandwidth usage. The ability to select and retrieve relevant video clips using semantics addresses this problem. This paper presents a Profile-based Feature Identification system for multimedia database systems, designed to support the use of video clips for e-learning. The system can store educational video clips with their semantics and efficiently retrieve required video clip segments based on those semantics. It creates profiles of the presenters appearing in the video clips based on their facial features and uses these profiles to partition similar video clips into logical, meaningful segments. The face recognition algorithm used by the system is based on the Principal Components Analysis (PCA) approach; however, the PCA algorithm has been modified to cope with face recognition in video key frames. Several improvements have been proposed to increase the face recognition rate and the overall performance of the system. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Colombo: Infotel Lanka Society [en_US]
dc.subject: Multimedia Databases [en_US]
dc.subject: Video Segmentation [en_US]
dc.subject: e-Learning [en_US]
dc.subject: Face [en_US]
dc.title: Profile Based Video Segmentation System to Support e-Learning [en_US]
dc.type: Research Abstract [en_US]
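The abstract describes recognizing presenters via a PCA (eigenfaces) approach. The paper's key-frame preprocessing and its modifications to PCA are not reproduced here; the following is only a minimal, generic eigenfaces sketch with synthetic data, where all shapes and names are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(faces, n_components=3):
    """faces: (n_samples, n_pixels) matrix of flattened, aligned face images.

    Returns the mean face, the top principal components ("eigenfaces"),
    and the projection of each training face onto those components.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal components directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]      # eigenfaces, shape (k, n_pixels)
    weights = centered @ components.T   # training projections, (n_samples, k)
    return mean, components, weights

def recognize(face, mean, components, weights):
    """Return the index of the nearest training face in eigenface space."""
    w = (face - mean) @ components.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Tiny synthetic example: 4 "faces" of 64 pixels each (random stand-ins).
rng = np.random.default_rng(0)
faces = rng.normal(size=(4, 64))
mean, comps, weights = train_eigenfaces(faces)
probe = faces[2] + 0.01 * rng.normal(size=64)  # noisy copy of face 2
print(recognize(probe, mean, comps, weights))  # → 2
```

In a video setting such as the one the abstract describes, the probe would be a face region cropped from a key frame rather than a stored training image, and matches against a presenter profile would drive the segment boundaries.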
Appears in Collections:University of Colombo School of Computing

Files in This Item:
File: abstract5.txt (1.07 kB, Text)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.