Volume-14 ~ Issue-3
Abstract: This paper compares various impulse-noise removal filters: fuzzy-based filters, discrete wavelet-based filters, continuous wavelet-based filters, and adaptive median filters based on homogeneity-level information. One of the difficult tasks in image processing is the removal of impulse noise, such as random-valued impulse noise and salt-and-pepper noise. Different types of noise-removal filters can be implemented with current technology. In this paper, filter efficiency is compared using factors such as MSE and PSNR. The result comparisons of the different filters are implemented with the help of MATLAB.
Keywords: Adaptive median filter, continuous wavelet filter, discrete wavelet filter, fuzzy based filter, salt and pepper noise
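The comparison metrics named in the abstract are standard. As a reading aid, here is a minimal Python sketch of MSE and PSNR for two grayscale images represented as flat pixel lists (the paper itself uses MATLAB; the function names here are our own):

```python
import math

def mse(original, restored):
    """Mean squared error between two equally sized grayscale images."""
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, restored)) / n

def psnr(original, restored, peak=255):
    """Peak signal-to-noise ratio in dB; higher means the filtered
    image is closer to the noise-free original."""
    e = mse(original, restored)
    if e == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / e)
```

A filter that leaves a lower MSE (equivalently, a higher PSNR) against the clean reference image is ranked better in such comparisons.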
[1] J. Astola and P. Kuosmanen, Fundamentals of Nonlinear Digital Filtering. Boca Raton: CRC Press, 1997.
[2] T. Chen and H. R. Wu, "Space variant median filters for the restoration of impulse noise corrupted images," IEEE Transactions on Circuits and Systems II, 48 (2001), pp. 784–789.
[3] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second Edition, Prentice Hall, 2001; and Book Errata Sheet (July 31, 2003), http://www.imageprocessingbook.com/downloads/errata sheet.htm.
[4] K. P. Soman and K. I. Ramachandran, "Insight into Wavelets from Theory to Practice".
[5] K. K. V. Toh, H. Ibrahim, and M. N. Mahyuddin, "Salt-and-pepper noise detection and reduction using fuzzy switching median filter," IEEE Trans. Consumer Electron., vol. 54, no. 4, pp. 1956–1961, Nov. 2008.
[6] Farzam Farbiz, Mohammad Bager Menhaj, Seyed A. Motamedi, and Martin T. Hagan, "A new fuzzy logic filter for image enhancement," IEEE Transactions on Systems, Man, and Cybernetics—Part B: Cybernetics, vol. 30, no. 1, February 2000.
[7] Robert D. Nowak, "Wavelet-based Rician noise removal," IEEE Transactions on Image Processing, vol. 8, no. 10, p. 1408, October 1999.
[8] Geoffrine Judith M. C. and N. Kumarasabapathy, "Study and analysis of impulse noise reduction filters," Signal & Image Processing: An International Journal (SIPIJ), vol. 2, no. 1, pp. 82–92, March 2011.
Abstract: Forest fires have a detrimental impact on the economy and the environment. The rapid spread of a fire can cause many casualties, and a great deal of effort is required to control it. To overcome this problem, it is highly important to detect a forest fire before it spreads to its surroundings. To date, many approaches, such as smoke velocity distribution, sensors, sounding systems, and thermal images, have been applied in forest surveillance and found to be ineffective. In this paper, we present an improved system for identifying forest fires using land surface temperature satellite imagery. From these images, an analysis is carried out to identify the mean wavelengths of abnormal temperature distributions compared to the surroundings of a small region; experimentation shows that 10.14 serves as the threshold mean wavelength. If the mean wavelength is beyond 10.14, it is treated as no forest fire. This approach uses k-means clustering and the Haar wavelet, resulting in an average accuracy rate of 89.5%.
Keywords: Forest fire, Haar wavelet, Image processing, k-means clustering, Surface temperature satellite imagery, Temperature scale.
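As an illustration of the clustering step only (not the paper's full Haar-wavelet pipeline), a toy two-cluster 1-D k-means can separate an abnormally hot region from its surroundings in a patch of surface temperatures. The 15-degree margin below is an arbitrary assumption for this sketch:

```python
def kmeans_1d(values, iters=50):
    """Plain two-cluster 1-D k-means; returns (cool_centre, hot_centre)."""
    centres = [min(values), max(values)]  # simple deterministic initialisation
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            groups[0 if abs(v - centres[0]) <= abs(v - centres[1]) else 1].append(v)
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return sorted(centres)

def abnormal_region(temps, margin=15.0):
    """Flag a possible fire when the hot cluster centre exceeds the cool
    one by more than `margin` (the margin value is our assumption)."""
    cool, hot = kmeans_1d(temps)
    return hot - cool > margin
```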
[1] Jerome Vicente and Philippe Guillemant, "An image processing technique for automatically detecting forest fire," International Journal of Thermal Sciences, 2002, pp.1113–1120.
[2] L. Giglio, J. Kendall and C.O. Justice, "Evaluation of global fire detection algorithms using simulated AVHRR data," International Journal of Remote Sensing, 1999, pp. 1947–1986.
[3] T.M. Lillesand and R.W. Kiefer, "Remote Sensing and Image Interpretation," 4th Edition, Wiley, New York, 2000.
[4] T. Hame and Y. Rauste, "Multitemporal satellite data in forest mapping and fire monitoring," EARSeL Advanced Remote Sensing, 1995, pp.93–101.
[5] M. D. Flannigan and T. H. Vonder Haar, "Forest fire monitoring using NOAA satellite AVHRR," Canadian Journal of Forest Research, 1986, pp. 975–982.
Abstract: A mobile ad-hoc network (MANET) is a self-configuring network, capable of self-directed operation, quickly deployable, and able to operate without infrastructure or centralized administration. Its nodes are self-configuring, independent, and mobile, so the topology is dynamic, and they have restricted computing resources. Routing protocols play an important role in improving Quality of Service (QoS) in a MANET. The reactive AODV routing protocol faces problems such as long routes, time delay, and mobility. In this work, the AODV routing protocol has been improved on the basis of QoS to enhance its routing capability. We use a particular TTL value and a dynamic threshold value to establish connections over long routes, along with a varying queue-length technique: when the node buffer is full, no packet is dropped from the queue, because the queue size varies according to the data. The dynamic TTL value establishes the connection with a distant receiver, and the varying queue minimizes packet loss. Routing protocols should integrate QoS metrics into route finding and maintenance to support end-to-end QoS. The performance of the improved AODV protocol is measured on the basis of standard performance metrics.
Keywords: MANET, QoS, TTL, AODV, Queue length.
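The dynamic-TTL idea above resembles AODV's expanding-ring search. The following sketch is our own simplification, not the authors' algorithm: it retries a TTL-bounded route-request flood, modeled here as depth-limited BFS over a neighbour graph, with a growing TTL until the destination is reached:

```python
from collections import deque

def ring_search(graph, src, dst, ttl_start=1, ttl_step=2, ttl_max=7):
    """Expanding-ring route discovery sketch: retry a TTL-bounded search
    with a growing TTL; returns (ttl_used, hop_count) or None."""
    ttl = ttl_start
    while ttl <= ttl_max:
        # BFS limited to `ttl` hops models a TTL-bounded RREQ flood.
        seen = {src: 0}
        q = deque([src])
        while q:
            node = q.popleft()
            if node == dst:
                return ttl, seen[node]
            if seen[node] < ttl:
                for nxt in graph.get(node, []):
                    if nxt not in seen:
                        seen[nxt] = seen[node] + 1
                        q.append(nxt)
        ttl += ttl_step
    return None
```

A short route is found with a small TTL (and hence a cheap flood); a long route is only reached after the TTL grows, which is the cost the dynamic-TTL scheme tries to manage.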
[1] C.R.Lin and J.S. Liu., "QoS routing in ad hoc wireless networks", IEEE J.Select.Areas Commun.,vol.17, pp.1488-1505, 1999.
[2] C.Perkins, "Ad-hoc On-Demand Distance Vector (AODV) routing", RFC3561[S], 2003.
[3] D.B.Johnson, D.A.Maltz, Y.C.Hu, "The Dynamic Source Routing protocol for mobile ad hoc networks",Internet Draft, 2004.
[4] J.Hong, "Efficient on-demand routing for mobile ad hoc wireless access networks", IEEE journal on selected Areas in Communications 22(2004), 11-35.
[5] Luo Junhai, Xue Lie, and Ye Danxia, "Research on multicast routing protocols for mobile ad hoc networks," Computer Networks 52 (2008), 988–997.
[6] J. Punde, N. Pissinou, and K. Makki, "On quality of service routing in ad-hoc networks," in Proc. of LCN'03, pp. 276–278, Oct. 2003.
[7] N. Wang and J. Chen, "A Stable On-Demand Routing Protocol for Mobile Ad Hoc Networks withWeight-Based Strategy," IEEE PDCAT'06, pp. 166–169, Dec. 2006.
[8] J. Lian, L. Li, and X. Zhu, "A Multi-Constraint QoS Routing Protocol with Route Request Selection Based on Mobile Predicting in MANET," IEEE CISW'07, pp. 342–345, Dec. 2007.
[9] D. Kim., J. Garcia-Luna-Aceves, K. Obraczka, J.-C. Cano, and P. Manzoni, "Routing mechanisms for mobile ad hoc networks based on the energy drain rate," IEEE Transactions on Mobile Computing, vol. 2, no. 2, pp. 161–173, 2003.
[10] Omkumar S. and Rajalakshmi S., "Analysis of Quality of Service using Distributed Coordination Function in AODV," European Journal of Scientific Research, ISSN 1450-216X, vol. 58, no. 1, pp. 6–10, 2011.
Paper Type | : | Research Paper |
Title | : | Web Mining Research Issues and Future Directions – A Survey |
Country | : | India |
Authors | : | D. Jayalatchumy, Dr. P. Thambidurai |
DOI | : | 10.9790/0661-1432027 |
Abstract: This paper surveys existing web mining techniques and the issues related to them. The World Wide Web is an interactive and popular way to transfer information. Because of the enormous and diverse information on the web, users cannot make use of that information effectively and easily. Data mining concentrates on the non-trivial extraction of implicit, previously unknown, and potentially useful information from very large amounts of data. Web mining is an application of data mining that has become an important area of research due to the vast growth of World Wide Web services in recent years. The aim of this paper is to present past and current techniques in web mining. The paper also summarizes various web mining techniques from the angles of feature extraction, transformation and representation, and data mining techniques in various application domains. The survey of data mining techniques covers clustering, classification, sequence pattern mining, association rule mining, and visualization. Research work by different authors is discussed along with its pros and cons. The paper also gives an overview of developments in web mining research and some important open research issues.
Keywords: Association rule mining, Data pre-processing, Video mining, Audio mining, Text mining, Image mining.
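Among the surveyed techniques, association rule mining on web-usage data is easy to illustrate. The sketch below (page names are hypothetical) shows its first step: counting page pairs that co-occur in the same session and keeping those that meet a minimum support:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(sessions, min_support=2):
    """Count page pairs that co-occur in a session; keep frequent ones.
    `sessions` is a list of lists of page names from a web log."""
    counts = Counter()
    for pages in sessions:
        # sorted(set(...)) deduplicates repeat visits and fixes pair order
        for pair in combinations(sorted(set(pages)), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```

Rules such as "visitors of page A also visit page B" are then derived from these frequent pairs by computing confidence; that step is omitted here.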
[1] Dipa Dixit and M. Kiruthika, "Preprocessing of Web Logs," (IJCSE) International Journal on Computer Science and Engineering, vol. 02, no. 07, 2010, pp. 2447–2452.
[2] Tasawar Hussain, Sohail Asghar, and Nayyer Masood, "Web Usage Mining: A Survey on Preprocessing of Web Log File," 978-1-4244-8003-6/10 ©2010 IEEE.
[3] Theint Theint Aye, "Web Log Cleaning for Mining of Web Usage Patterns".
[4] S. K. Pani et al., "Web Usage Mining: A Survey on Pattern Extraction from Web Logs," International Journal of Instrumentation, Control & Automation (IJICA), vol. 1, no. 1, 2011.
[5] Chidansh Amitkumar Bhatt and Mohan S. Kankanhalli, "Multimedia Data Mining: State of the Art and Challenges," published online 16 November 2010, © Springer Science+Business Media, LLC 2010.
[6] Margaret H. Dunham, Yongqiao Xiao, Le Gruenwald, and Zahid Hossain, "A Survey of Association Rules: Web Usage Mining".
[7] Brijendra Singh and Hemant Kumar Singh, "Web Data Mining Research: A Survey," 978-1-4244-5967-4/10/$26.00 ©2010 IEEE.
[8] Rajni Pamnani, Pramila Chawan, Qingtian Han, and Xiaoyan Gao, "Web Usage Mining: A Research Area in Web Mining".
[9] Wenguo Wu, "Study on Web Mining Algorithm Based on Usage Mining," Computer-Aided Industrial Design and Conceptual Design, 2008 (CAID/CD 2008), 9th International Conference, 22–25 Nov. 2008.
[10] R. Kosala and H. Blockeel, "Web Mining Research: A Survey," SIGKDD Explorations, ACM Press, 2(1): 2000, pp. 1–15.
Abstract: Phishing is a type of network attack in which the attacker creates a replica of an existing Web page to fool users (e.g., by using specially designed e-mails or instant messages) into submitting personal, financial, or password data to what they think is their service provider's Web site. In this paper, we propose a new end-host based anti-phishing algorithm, which we call Link Guard, that utilizes the generic characteristics of the hyperlinks used in phishing attacks. These characteristics are derived by analyzing the phishing data archive provided by the Anti-Phishing Working Group (APWG). Because it is based on the generic characteristics of phishing attacks, Link Guard can detect not only known but also unknown phishing attacks. We have implemented Link Guard in Windows XP. Our experiments verify that Link Guard is effective in detecting and preventing both known and unknown phishing attacks with minimal false negatives: it successfully detects 195 out of 203 phishing attacks. Our experiments also show that Link Guard is lightweight and can detect and prevent phishing attacks in real time.
Index Terms: Hyperlink, Link Guard algorithm, Network security, Phishing attacks.
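One generic hyperlink characteristic analyzed in such systems is a mismatch between the DNS name shown in the anchor text and the DNS name of the actual link target. The sketch below illustrates that single rule in Python; it is a simplified stand-in for the Link Guard algorithm, not the algorithm itself:

```python
import re

def extract_dns(url):
    """Pull the DNS name out of a URL (crude, but enough for the examples)."""
    m = re.match(r"(?:https?://)?([^/\s]+)", url.strip())
    return m.group(1).lower() if m else ""

def suspicious_hyperlink(visible_text, actual_href):
    """Flag a hyperlink when the anchor text itself looks like a URL but
    its DNS name differs from the real target's DNS name, or when the
    target is a bare dotted-decimal address."""
    if "." not in visible_text:
        return False  # anchor text is not URL-like; this rule does not apply
    visible_dns = extract_dns(visible_text)
    actual_dns = extract_dns(actual_href)
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", actual_dns):
        return True   # dotted-decimal target, a common phishing trait
    return visible_dns != actual_dns
```

The full algorithm in the paper combines several such rules (encoded links, fake DNS names, and so on); this shows only the mismatch check.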
[1] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, and C. D. Spyropoulos, "An Experimental Comparison of Naive Bayesian and Keyword-Based Anti-Spam Filtering with Encrypted Personal E-mail Messages," in Proc. SIGIR 2000, 2000.
[2] The Anti-Phishing Working Group. http://www.antiphishing.org/.
[3] Neil Chou, Robert Ledesma, Yuka Teraguchi, Dan Boneh, and John C. Mitchell, "Client-side defense against web-based identity theft," in Proc. NDSS 2004, 2004.
[4] Cynthia Dwork, Andrew Goldberg, and Moni Naor, "On Memory-Bound Functions for Fighting Spam," in Proc. Crypto 2003, 2003.
[5] EarthLink. ScamBlocker. http://www.earthlink.net/software/free/toolbar/.
[6] David Geer, "Security Technologies Go Phishing," IEEE Computer, 38(6):18–21, 2005.
[7] John Leyden, "Trusted search software labels fraud site as 'safe'," http://www.theregister.co.uk/2005/09/27/untrusted search/.
[8] Microsoft. Sender ID Framework. http://www.microsoft.com/mscorp/safety/technologies/senderid/default.mspx.
[9] Netcraft. Netcraft toolbar. http://toolbar.netcraft.com/.
[10] PhishGuard.com. Protect Against Internet Phishing Scams. http://www.phishguard.com/.
Paper Type | : | Research Paper |
Title | : | New and Unconventional Techniques in Pictorial Steganography and Steganalysis |
Country | : | India |
Authors | : | Vikas Kothari, Dinesh Goyal |
DOI | : | 10.9790/0661-1433741 |
Abstract: Steganography involves transferring data through a given channel in such a way that the communication itself is hidden from all uninvolved parties. It is a form of security through obscurity; the technique protects both the messages and the stations involved in the communication. In the most modern forms of digital steganography, electronic communication may include complex steganographic code inside the transport layer, which accomplishes the aforementioned task of protecting data and the anonymity of the participating stations. Steganography is often confused with cryptography; the former is fundamentally different in that the message is sent secretly, without attracting the attention of third-party intruders, by protecting the anonymity of the channels, nodes, and stations involved.
Keywords: Steganography, Security and Protection, Cryptography, Data Hiding, Data Encryption
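For readers new to pictorial steganography, the classical (and decidedly conventional) baseline that newer techniques improve upon is least-significant-bit embedding. A minimal sketch on 8-bit grayscale pixel lists:

```python
def embed_lsb(pixels, message_bits):
    """Hide one bit per pixel in the least-significant bit of 8-bit
    grayscale values; each pixel changes by at most 1, so the cover
    image looks unchanged to the eye."""
    assert len(message_bits) <= len(pixels), "cover image too small"
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden by embed_lsb."""
    return [p & 1 for p in pixels[:n_bits]]
```

LSB embedding is exactly what modern steganalysis detects most easily, which motivates the unconventional techniques this paper discusses.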
[1] M. Kharrazi, H. Sencar, and N. Memon, "Image Steganography: Concepts and Practice," Lecture Note Series, Institute for Mathematical Sciences, National University of Singapore, 2004.
[2] I. Cox, T. Kalker, G. Pakura, and M. Scheel, "Information transmission and steganography," Lecture Notes in Computer Science, vol. 3710, p. 15, 2005.
[3] C. Shannon, "Communication Theory of Secrecy Systems," Bell System Technical Journal, vol. 28, pp. 656-715, 1949.
[4] G. J. Simmons, "The prisoners' problem and the subliminal channel," in Advances in Cryptology: Proceedings of CRYPTO'83. Plenum Pub Corp, 1984, pp. 51-67.
[5] R. Givner-Forbes, "Steganography: Information Technology in the Service of Jihad," The international Centre for Political Violence and Terrorism Research, March 2007. [Online]. Available: www.pvtr.org
[6] R. Anderson, "Stretching the limits of steganography," Lecture Notes in Computer Science, vol. 1174, pp. 39-48, 1996.
[7] Wikipedia, The Free Encyclopedia. [Online]. Available: http://en.wikipedia.org/wiki/Printer_steganography
[8] G. Cancelli and M. Barni, "MPSteg-color: A new steganographic technique for color images," Information Hiding: 9th International Workshop, IH 2007, Saint Malo, France, June 11-13, vol. 4567, pp. 1-15, 2007.
[9] A. Westfeld, "F5-a steganographic algorithm: High capacity despite better steganalysis," in Information Hiding: 4th International Workshop, IH 2001, Pittsburgh, PA, USA, April 25-27, 2001: Proceedings. Springer, 2001, p. 289.
[10] M. Goljan, J. Fridrich, and T. Holotyak, "New blind steganalysis and its implications," Proceedings of SPIE, vol. 6072, pp. 1-13, 2006.
Abstract: The development of techniques for computing PageRank efficiently for Web-scale graphs is important for a number of reasons. For Web graphs containing a billion nodes, computing a PageRank vector can take several days. Computing PageRank quickly is important to reduce the lag between when a new crawl is completed and when it can be made available for searching. Google's ranking algorithm currently uses the power method, which converges to the dominant eigenvector. According to research conducted by Oranova, interspersing the power method with the Rayleigh quotient yields a significant acceleration. The objectives of this research were therefore (1) to analyze datasets taken from previous research and adapt them to this work, (2) to develop an algorithm that computes PageRank and speeds up the computation by combining the Rayleigh quotient with an extrapolation method, and (3) to analyze the performance of the algorithm. The research was conducted on 36 datasets taken from Stanford and Toronto Universities. The results show that using the Rayleigh quotient inside the extrapolation method speeds up PageRank computation.
Keywords: PageRank, extrapolation method, Rayleigh quotient, computation speed
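A minimal Python sketch of the power-method baseline that the paper accelerates, with the Rayleigh quotient of the final iterate reported as an eigenvalue estimate (for the Google matrix it should approach the dominant eigenvalue 1). The extrapolation step itself is omitted:

```python
def pagerank_power(links, d=0.85, tol=1e-12, max_iter=1000):
    """Power-method PageRank over a link dict {page: [pages it links to]}
    (every page must appear as a key; [] marks a dangling page).
    Returns (rank vector, Rayleigh quotient of the final iterate)."""
    pages = sorted(links)
    n = len(pages)

    def step(x):
        # One multiplication by the Google matrix: damped link-following
        # plus the uniform teleportation term (valid since sum(x) == 1).
        y = {p: (1 - d) / n for p in pages}
        for p in pages:
            outs = links[p] or pages  # dangling pages spread rank evenly
            share = d * x[p] / len(outs)
            for q in outs:
                y[q] += share
        return y

    rank = {p: 1.0 / n for p in pages}
    for _ in range(max_iter):
        new = step(rank)
        delta = sum(abs(new[p] - rank[p]) for p in pages)
        rank = new
        if delta < tol:
            break
    ax = step(rank)
    # Rayleigh quotient x.(Ax)/x.x estimates the dominant eigenvalue.
    lam = sum(rank[p] * ax[p] for p in pages) / sum(rank[p] ** 2 for p in pages)
    return rank, lam
```

The Rayleigh quotient converges quadratically in the eigenvector error, which is why combining it with the power method can save iterations on large graphs.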
[1]. Bala. 2011. An Overview of Efficient Computation of PageRank.
[2]. Eiermann. 2009. The Numerical Solution of Eigen Problems. TU Bergakademie Freiberg.
[3]. Garfield. 1979. Citation Indexing: Its Theory and Application in Science, Technology, and Humanities. New York: John Wiley.
[4]. Kamvar. 2003. Extrapolation Method for Accelerating PageRank Computations. Stanford University.
[5]. Langville and Meyer. 2005. Deeper Inside PageRank. Internet Mathematics, Vol. 1(3):335-380.
[6]. Mitra. 2012. CA Based Moore Filter in SEO. IJARCSSE.
[7]. Oranova and Arifin. 2009. Implementing Quotient Rayleigh in the Power Method to Accelerate PageRank Computation. Medwell Journal.
[8]. Vise, David and Malseed, Mark. 2005. The Google Story. p. 37. ISBN 0-553-80457-X.
Abstract: Do the security challenges posed by virtualisation make it a non-starter for your sensitive business applications? There is much debate about cloud computing, which promises to deliver utility-based virtual computing to the front door of every company. Some are dismissing it as marketing hype – but if it delivers the benefits it claims, it has the potential to transform enterprise computing. So what are the challenges – and what are the opportunities posed by this development?
Keywords: Cloud computing, Virtual Machine, Security Issues, Service Management, IDS/IPS.
[1] D. Patterson, "The trouble with multi-core," IEEE Spectrum, vol. 47, no. 7, pp. 28–53, 2010.
[2] J. V. Neumann, "Theory of natural and artificial automata," in Papers of John Von Neumann on Computers and Computer Theory, W. Aspray and A. W. Burks, Eds., vol. 12 of Charles Babbage Institute Reprint Series for the History of Computing, pp. 408–474, The MIT Press, Cambridge, Mass, USA, 1986.
[3] S. Balasubramaniam, K. Leibnitz, P. Lio, D. Botvich, and M. Murata, "Biological principles for future Internet architecture design," IEEE Communications Magazine, vol. 49, no. 7, pp. 44–52, 2011.
[4] R. Mikkilineni, "Is the Network-centric Computing Paradigm for Multicore, the Next Big Thing?" Convergence of Distributed Clouds, Grids and Their Management, 2010, http://www.computingclouds.wordpress.com/.
[5] G. Morana and R. Mikkilineni, "Scaling and self-repair of Linux based services using a novel distributed computing model exploiting parallelism," in Proceedings of the 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE '11), pp. 98–103, June 2011.
[6] R. Mikkilineni and I. Seyler, "Parallax—a new operating system for scalable, distributed, and parallel computing," in Proceedings of the IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW '11), pp. 976–983, 2011.
[7] R. Mikkilineni and I. Seyler, "Parallax—a new operating system prototype demonstrating service scaling and service self-repair in multi-core servers," in Proceedings of the 20th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE '11), pp. 104–109, 2011.
[8] R. Mikkilineni, Designing a New Class of Distributed Systems, Springer, New York, NY, USA, 2011.
[9] R. Buyya, T. Cortes, and H. Jin, "Single system image," International Journal of High Performance Computing Applications, vol. 15, no. 2, pp. 124–135, 2001.
[10] http://www.seamicro.com/.
[10] http://www.seamicro.com/.
Paper Type | : | Research Paper |
Title | : | Software effort estimation through clustering techniques of RBFN network |
Country | : | India |
Authors | : | Usha Gupta, Manoj Kumar |
DOI | : | 10.9790/0661-1435862 |
Abstract: Nowadays, software cost/effort estimation is a very complex job, and several estimation techniques have been developed for it. Parameters such as time, cost, and the number of staff required must be assessed at an early stage. The Constructive Cost Model (COCOMO) is one of the best-known models for estimating the cost and time, in person-months, of a software project. Estimates of cost and time support project planning and tracking and also control the expenses of the software development process; accurate effort estimation improves the project success rate. In this paper, the main focus is on the accuracy of software effort/cost estimation using a radial basis function neural network (RBFN) incorporating ANN-COCOMO II, which can be used for functional approximation. This model estimates the total effort of software development according to the characteristics of COCOMO II along with radial basis clustering techniques. The RBFN is faster than other networks because its learning process has two stages, and both stages can be made efficient by appropriate learning algorithms. The RBFN is trained on a COCOMO II dataset.
Keywords: COCOMO II, RBFN, Clustering techniques, K-means, APC-III.
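The two-stage training that the abstract credits for RBFN speed can be sketched briefly: stage one places the radial centres by k-means clustering, and stage two solves the linear output weights by least squares. The sketch below is a 1-D toy (effort versus project size) and uses a basic-COCOMO-style curve only as demo data; it is not the paper's COCOMO II model:

```python
import math

def place_centres(xs, k, iters=30):
    """Stage one: 1-D k-means with a deterministic spread-out
    initialisation (assumes k >= 2 and distinct inputs)."""
    srt = sorted(xs)
    centres = [srt[i * (len(srt) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centres[j]))].append(x)
        centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]
    return centres

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def train_rbfn(xs, ys, k=3, spread=5.0):
    """Stage two: Gaussian features at the centres plus a bias term,
    output weights fitted by least squares via the normal equations."""
    centres = place_centres(xs, k)

    def feats(x):
        return [math.exp(-(x - c) ** 2 / (2 * spread ** 2)) for c in centres] + [1.0]

    phi = [feats(x) for x in xs]
    m = k + 1
    A = [[sum(row[i] * row[j] for row in phi) for j in range(m)] for i in range(m)]
    b = [sum(row[i] * y for row, y in zip(phi, ys)) for i in range(m)]
    w = solve(A, b)
    return lambda x: sum(wi * f for wi, f in zip(w, feats(x)))
```

Because only the output layer is trained by optimization (the centres come from clustering), training is a single linear solve, which is the speed advantage the abstract refers to.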
[1] Ch. Satyananda Reddy, P. Sankara Rao, KVSVN Raju, and V. Valli Kumari, "A New Approach for Estimating Software Effort Using RBFN Network," IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 7, July 2008.
[2] Sriman Srichandan, "A New Approach of Software Effort Estimation Using Radial Basis," International Journal on Advanced Computer Theory and Engineering (IJACTE), ISSN (Print): 2319-2526, vol. 1, no. 1, 2012, pp. 113-120.
[3] M. Jorgensen, and M. Shepperd, "A Systematic Review of Software Development Cost Estimation Studies", IEEE Transactions on Software Engineering, Vol. 33, No. 1, 2007, pp. 33-53.
[4] K. Molokken, and M. Jorgensen, "A Review of Surveys on Software Effort Estimation", in International Symposium on Empirical Software Engineering, 2003, pp. 223-231.
[5] G. R. Finnie, and G. Witting, and J.-M. Desharnais, "A Comparison of Software Effort Estimation Techniques: Using Function Points with Neural Networks, Case-Based Reasoning and Regression Models", Systems and Software, Vol. 39, No. 3, 1997, pp. 281-289.
[6] A. Idri, and A. Abran, and S. Mbarki, "An Experiment on the Design of Radial Basis Function Neural Networks for Software Cost Estimation", in 2nd IEEE International Conference on Information and Communication Technologies: from Theory to Applications, 2006, Vol. 1, pp. 230-235.
[7] A. Idri, and A. Zahi, and E. Mendes, and A. Zakrani, "Software Cost Estimation Models Using Radial Basis Function Neural Networks", in International Conference on Software Process and Product Measurement, 2007, pp. 21-31.
Paper Type | : | Research Paper |
Title | : | Data Analysis and Result Computation (DARC) Algorithm for Tertiary Institutions |
Country | : | Nigeria |
Authors | : | Abel U. Osagie, Abu Mallam |
DOI | : | 10.9790/0661-1436369 |
Abstract: Tertiary institutions (e.g., universities, polytechnics, and colleges of education) are expected to manage and preserve data on both present and past students. These data include general information about students as well as examination results at any or all stages of studentship. Analysis of student data can inform solutions to a wide variety of educational challenges, such as exploring group differences, gender representation, ethnic representation, and growth over time. However, data management and examination result computation for a large population of students can be rigorous, time-consuming, and prone to errors. Information management technology offers a comprehensive portfolio of students' academic records, available in real time to students and professionals. That a comprehensive solution for use in many tertiary institutions is not yet common presents a unique opportunity for researchers to help shape the future of this technology. We introduce a Fortran algorithm for Data Analysis and Result Computation (DARC). The algorithm analyzes student data and computes the examination results of all students in a department at once. DARC reads particular spreadsheet formats of students' information, semester scores, and previous academic records, after which it outputs, among other things, an analysis of student information; a comprehensive result format showing grade point averages (GPA) and updated cumulative grade point averages (CGPA); and a summary of academic records including logs of carry-over courses, carry-over courses that have been cleared, outstanding courses, and outstanding courses that have been cleared. Using synthetic records of some students, we validate the reliability of the algorithm by conducting accuracy tests. The algorithm is flexible and can be suitably modified to manage and preserve the academic and general records of both present and past students in any tertiary institution.
Keywords: Fortran Algorithm, Student records, Result Computation and Analysis, Reliable.
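The GPA/CGPA bookkeeping that DARC performs can be illustrated briefly. The paper's implementation is in Fortran; this Python sketch assumes a 5-point grading scale (common in Nigerian universities) and a (course, grade, units) record format, both of which are our assumptions:

```python
# Assumed 5-point scale; actual institutional scales vary.
GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

def gpa(results):
    """results: list of (course, grade, units) for one semester."""
    units = sum(u for _, _, u in results)
    return sum(GRADE_POINTS[g] * u for _, g, u in results) / units

def cgpa(semesters):
    """Units-weighted cumulative GPA across every semester on record."""
    return gpa([r for sem in semesters for r in sem])

def carry_over_log(semesters):
    """Courses with grade F in some semester that were never passed,
    i.e. the outstanding carry-over courses."""
    failed, passed = set(), set()
    for sem in semesters:
        for course, g, _ in sem:
            (failed if g == "F" else passed).add(course)
    return sorted(failed - passed)
```

A cleared carry-over course appears in both sets and so drops out of the log, mirroring the "carry over courses that have been cleared" output the abstract describes.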
[1]. American Association of School Administrators (2002). "Using data to improve schools: What's working". Available: www.aasa.org/cas/UsingDataToImproveSchools.pdf.
[2]. Chrispeels, J.H. (1992). "Purposeful restructuring: Creating a climate of learning and achievement in elementary schools." London: Falmer Press.
[3]. Ed, Akin. (2001). "Object oriented programming via fortran 90/95", Rice University, Houston, Texas
[4]. Emmanuel, B. & Choji, D. N. "A software application for colleges of education result processing", Journal of Information Engineering and Applications, Volume 2, No. 11 (2012).
[5]. Feldman, J., & Tung, R. (2001). "Using data based inquiry and decision-making to improve instruction", ERS Spectrum 19(3), 10-19.
[6]. Fullan, M.G., & Miles, M.M. (1992). "Getting reform right: What works and what doesn't", Phi Delta Kappan, 73 (10), 744-752.
[7]. Ian D. Chivers & Jane Sleightholme, (2006), "Introduction to programming with fortran", Springer-Verlag London Limited.
[8]. Moses E. Ekpenyong.(2008). "A Real-Time IKBS for students results computation", International Journal of Physical Sciences (Ultra Scientist of Physical Sciences) Volume 20, Number 3(M), September – December, 2008. Available: http://www.mySQL.com, (July 22,2012)
[9]. Okonigene, R.E., Ighalo, G.I., Ogbeifun, E. (2008). "Developed personal record software", The Pacific Journal of Science and Technology .9(2):407-412. Available: http://www.akamaiuniversity.us/PJST.htm (July 15,2012).
[10]. Stringfield, S., Wayman, J.C., & Yakimowski, M. (2003, March). "Scaling up data use in classrooms, schools, and districts", Paper presented at the Scaling Up Success conference, Harvard Graduate School of Education, Boston MA.
Paper Type | : | Research Paper |
Title | : | Automatic Learning Image Objects via Incremental Model |
Country | : | India |
Authors | : | L. Ramadevi, P. Nikhila, D. Srivalli |
DOI | : | 10.9790/0661-1437078 |
Abstract: A well-built dataset is a necessary starting point for advanced computer vision research. It plays a crucial role in evaluation and provides a continuous challenge to state-of-the-art algorithms. Dataset collection is, however, a tedious and time-consuming task. This paper presents a novel automatic dataset collecting and model learning approach that uses object recognition techniques in an incremental method. The goal of this work is to use the tremendous resources of the web to learn robust object category models in order to detect and search for objects in real-world cluttered scenes. It mimics the human learning process of iteratively accumulating model knowledge and image examples. We adapt a non-parametric graphical model and propose an incremental learning framework. Our algorithm automatically collects much larger object category datasets for 22 randomly selected classes from the Caltech 101 dataset. Furthermore, we offer not only more images in each object category dataset but also a robust object model and meaningful image annotation. Our experiments show that OPTIMOL, the proposed framework, is capable of collecting image datasets that are superior to Caltech 101 and LabelMe.
Keywords: Content based image retrieval, Incremental model learning, learning to detect object.
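The iterative accumulate-and-relearn loop can be caricatured with a running-mean "model" over image feature vectors: accept a candidate when it is close enough to the current model, then fold it in. This toy is far simpler than the non-parametric graphical model the paper adapts; the radius threshold and feature vectors are illustrative only:

```python
def incremental_collect(seed, stream, radius=3.0):
    """Toy incremental dataset collection: the category 'model' is the
    running mean of accepted feature vectors; a candidate is accepted
    when it lies within `radius` of the model, then folded in."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    model = [sum(col) / len(seed) for col in zip(*seed)]
    accepted, n = list(seed), len(seed)
    for feat in stream:
        if dist(feat, model) <= radius:
            accepted.append(feat)
            n += 1
            # Incremental mean update: no need to revisit old examples.
            model = [m + (f - m) / n for m, f in zip(model, feat)]
    return model, accepted
```

The key property shared with the paper's framework is that each newly accepted example refines the model used to judge the next candidate, so the dataset and the model grow together.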
[1] S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. PAMI, 26(11):1475-1490, 2004.
[2] T. Berg and D. Forsyth. Animals on the web. In Proc. Computer Vision and Pattern Recognition, 2006.
[3] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative-Model Based Vision, 2004.
[5] L. Fei-Fei, R. Fergus, and P. Perona. One-Shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
[6] L. Fei-Fei and P. Perona. A Bayesian hierarchy model for learning natural scene categories. CVPR, 2005.
[7] H. Feng and T. Chua. A bootstrapping approach to annotating large image collection. Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, pages 55-62, 2003.
[8] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning Object Categories from Google's Image Search. Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, 2, 2005.
[9] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proc. Computer Vision and Pattern Recognition,(2003), pages 264-271.
[10] R. Fergus, P. Perona, and A. Zisserman. A visual category filter for Google images. In Proc. 8th European Conf. on Computer Vision, 2004.
Paper Type | : | Research Paper |
Title | : | Face Recognition System under Varying Lighting Conditions |
Country | : | India |
Authors | : | P. Kalaiselvi, S. Nithya |
DOI | : | 10.9790/0661-1437988 |
Abstract: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. Other recognition systems don't nullify most of the lighting variations. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching and kernel based feature extraction and multiple feature fusion. We present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition. We introduce Local Ternary Pattern (LTP), a generalization of the Local Binary Pattern (LBP) local texture descriptor less sensitive to noise. We further increase robustness by introducing Phase Congruency. The resulting method provides a face verification rate of 88.1% at 0.1% false accept rate. Experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions. We simulate this project using MATLAB software.
Keywords: Recognition, Illumination normalization, Local texture-based face representations, Local Binary Pattern, Local Ternary Pattern.
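The LTP descriptor named in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it assumes the common 8-neighbour, radius-1 pattern, and the noise threshold `t` is an illustrative value. Each neighbour is coded +1, 0 or -1 against the centre pixel, and the ternary code is split into "upper" and "lower" LBP-style binary codes, which is what makes LTP less sensitive to noise than LBP.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Compute upper/lower Local Ternary Pattern codes for each interior pixel.

    For each of the 8 neighbours p of a centre pixel c, the ternary code is
    +1 if p >= c + t, -1 if p <= c - t, and 0 otherwise; the ternary pattern
    is then split into two binary (LBP-style) codes. The threshold t is an
    assumed illustrative value, not necessarily the paper's setting.
    """
    img = img.astype(np.int32)          # avoid uint8 overflow in c + t
    h, w = img.shape
    # 8 neighbour offsets, clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= ((nb >= centre + t).astype(np.uint8) << bit)  # +1 half
        lower |= ((nb <= centre - t).astype(np.uint8) << bit)  # -1 half
    return upper, lower
```

On a perfectly flat patch both code maps are zero, since no neighbour differs from the centre by more than the threshold; a single bright outlier flips only the corresponding bit in the upper code.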
[1] Adini. Y, Moses. Y, and Ullman. S, (Jul. 1997) "Face recognition: The problem of compensating for changes in illumination direction," IEEE Trans.Pattern Anal. Mach. Intel., vol. 19, no. 7, pp. 721–732.
[2] Ahonen. T, Hadid. A and Pietikainen. M, (Dec. 2006) "Face description with local binary patterns: Application to face recognition," IEEE Trans.pattern Anal. Mach. Intel., vol. 28, no. 12, pp. 2037– 2041.
[3] Dalal. N and Triggs. B, (2005) "Histograms of oriented gradients for human detection," in Proc. CVPR, Washington, DC, pp. 886–893.
[4] Lee. K, Ho. J, and Kriegman. D, (May 2005) "Acquiring linear subspaces for face recognition under variable lighting," IEEE Trans. Pattern Anal.
Abstract: Vector quantization is a compression technique used to compress image data in the spatial domain. Since it is a lossy technique, maintaining both image quality and compression ratio is a difficult task; for this, the codebook that stores the image data should be optimally designed. In this paper, the K-means algorithm is used to optimize the codebook. We generated codebooks with the LBG, KPE, KFCG and Random Selection methods. We found that the Mean Square Error reduces with every iteration and, after a certain number of iterations, stops reducing because the optimal value is reached; at that point the codebook is optimized. The results show that the codebook obtained from the KFCG algorithm has the least Mean Square Error. This indicates that the KFCG codebook is nearest to the optimal point and, when refined with the K-means algorithm, gives the best optimization in comparison with the other algorithms.
Keywords: Codebook Optimization, Euclidean Distance, K-means Algorithm, Mean Square Error, Vector Quantization
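The codebook refinement the abstract describes is a standard Lloyd/K-means loop. The sketch below is a minimal NumPy version under that assumption, not the authors' exact implementation: starting from any initial codebook (LBG, KPE, KFCG or random selection), each iteration assigns every training vector to its nearest codevector by Euclidean distance and then moves each codevector to the centroid of its cluster, so the recorded MSE is non-increasing and flattens once a (local) optimum is reached.

```python
import numpy as np

def kmeans_codebook(vectors, codebook, iters=20):
    """Refine an initial VQ codebook with K-means (Lloyd iterations).

    vectors  : (N, d) training vectors, e.g. flattened image blocks
    codebook : (K, d) initial codebook (from LBG/KPE/KFCG/random selection)
    Returns the refined codebook and the per-iteration MSE history.
    """
    history = []
    for _ in range(iters):
        # squared Euclidean distance from every vector to every codevector
        d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                       # nearest-codevector assignment
        history.append(d2[np.arange(len(vectors)), labels].mean())
        # centroid update: move each codevector to the mean of its cluster
        for k in range(len(codebook)):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, history
```

The MSE history makes the paper's stopping criterion visible: once successive entries stop decreasing, further iterations leave the codebook unchanged.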
[1] R. Navaneethakrishnan, "Study of Image Compression Techniques," International Journal of Scientific & Engineering Research, Vol. 3, No. 7, pp. 1-5, July 2012.
[2] G. Boopathy and S. Arockiasam, "Implementation of Vector Quantization for Image Compression - A Survey," Global Journal of Computer Science and Technology, Vol. 10, No. 3, pp. 22-28, April 2010.
[3] Carlos R.B. Azevedo, Esdras L. Bispo Junior, Tiago A. E. Ferreira, Francisco Madeiro, and Marcelo S. Alencar, "An Evolutionary Approach for Vector Quantization Codebook Optimization," Springer-Verlag Heidelberg, pp. 452-461, 2008.
[4] Pamela C. Cosman, Karen L. Oehler, Eve A. Riskin, and Robert M. Gray, "Using Vector Quantization for Image Processing," In Proc. Of The IEEE, Vol. 81, No. 9, pp. 1326-1341, September 1993.
[5] Tzu-Chuen Lu and Ching-Yun Chang, "A Survey of VQ Codebook Generation," Journal of Information Hiding and Multimedia Signal Processing, Vol. 1, No. 3, pp. 190-203, July 2010.
[6] H.B. Kekre and Tanuja K. Sarode, "Vector Quantized Codebook Optimization using K-Means," International Journal on Computer Science and Engineering, Vol. 1, No.3, pp. 283-290, 2009.
[7] S. Vimala, "Convergence Analysis of Codebook Generation Techniques for Vector Quantization using K-Means Clustering Technique," International Journal of Computer Applications, Vol. 21, No. 8, pp. 16-23, May 2011.
[8] J. B. MacQueen, "Some Methods for Classification and Analysis of Multivariate Observations", Proceedings of 5-th Berkeley symposium on Mathematical Statistics and Probability", Berkely, University of California Press, vol 1, pp. 281-297, 1967.
[9] Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. Commun.', Vol. COM-28, No. 1, pp. 84-95, January 1980.
[10] H.B.Kekre and Tanuja K. Sarode, "Two-level Vector Quantization Method for Codebook Generation using Kekre's Proportionate Error Algorithm," International Journal of Image Processing, Vol. 4, No. 1, pp. 1-11, 2010.
Paper Type | : | Research Paper |
Title | : | Analytical Review on the Correlation between AI and Neuroscience |
Country | : | India |
Authors | : | Roohi Sille, Gaurav Rajput, Aviral Sharma |
DOI | : | 10.9790/0661-14399103 |
Abstract: Neuroscience is the pragmatic study of brain anatomy and physiology. AI and neuroscience are both concerned with the behavior of the human brain. The alliance between artificial intelligence and neuroscience can produce an understanding of the mechanisms in the brain that generate human cognition. This paper discusses the benefits that AI has gained from the field of neuroscience, dealing principally with learning, perception and reasoning. Neuroscience helps in understanding natural intelligence, which correlates with artificial intelligence. A bridge between AI and neuroscience is also explored.
Keywords: Neuroscience, Artificial Intelligence, Artificial Neural Network, Neuroethology, Hybrots.
[1] Tsvi Achler, Eyal Amir, Neuroscience and AI Share the Same Elegant Mathematical Trap.
[2] Steve M. Potter, "What Can AI Get from Neuroscience?" in M. Lungarella et al. (Eds.): 50 Years of AI, Festschrift, LNAI 4850, pp. 174–185, Springer-Verlag Berlin Heidelberg, 2007.
[3] Llinas, R. & Ribary, U. (1993) Coherent 40-Hz oscillation characterizes dream state in humans. Proceedings of the National Academy of Sciences USA 90(5):2078–81.
[4] Joanna J. Bryson, Modular Representations of Cognitive Phenomena in AI, Psychology and Neuroscience.
[5] Jason Dhabliwala, Cognitive Psychology, Cognitive Neuroscience and Artificial Intelligence.
[6] P. Husbands and A. Philippides, Computational Neuroscience for Advancing Artificial Intelligence: Models, Methods and Applications, Chapter 10, Medical Information Science Reference (an imprint of IGI Global).
[7] Stephen Kelly and Alfons Schuster, "Application of a Fuzzy Controller on a Lego Mindstorms Robot".
[9] Nilsson, N. J. (Ed.). (1984). Shakey the Robot, Technical Note 323. AI Center, SRI International, Menlo Park, CA.; Nolfi, S., & Floreano, D. (2000) Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. Cambridge, MA: MIT Press.
[10] Brooks, R. A. (1986). A Robust Layered Control System for a Mobile Robot. IEEE Journal on Robotics and Automation, 2(1), 14–23. Brooks, R. A. (1989). A Robot that Walks; Emergent
Abstract: One major challenge faced by Android users today is the security of the operating system, especially during setup. The use of smartphones for communication, social networking, mobile banking and payment systems has tripled, and many depend on them for their daily transactions. Android OS on smartphones is so popular today that it has overtaken the most popular mobile operating systems, such as RIM, iOS, Windows Mobile and even Symbian, which ruled the mobile market for more than a decade. This paper performs penetration testing of Android-based smartphones using an application program designed to simplify port-scanning techniques for information gathering and vulnerability attack. An attempt was made to test and analyze the security architecture of the Android operating system using the latest penetration testing and vulnerability tools based on Kali Linux. Three different versions of Android, versions 2.3, 3.2 and 4.2, were simulated on a virtual machine. The results show that although there is an improvement in the security stack across the different Android versions, version 4.2 is more secure than the others. This work is important for users and researchers who use their Android smartphones in a critical environment with hostile network traffic.
Keywords: Android, Linux, Penetration testing, Security, Smartphones
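The port-scanning step used for information gathering can be illustrated with a minimal TCP connect scan. This is a hedged sketch only, not the paper's Kali Linux toolchain: the host and port list are placeholders, and such scans should only be run against devices you are authorised to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Simple TCP connect scan: the information-gathering step of a pentest.

    Attempts a full TCP handshake on each port; connect_ex() returning 0
    means the connection succeeded, i.e. the port is open. Host and ports
    are illustrative placeholders.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)            # don't hang on filtered ports
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A real scanner such as Nmap adds SYN scanning, service fingerprinting and OS detection on top of this basic handshake test, but the open/closed decision shown here is the same.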
[1] N. Kumar and M.E. Ul Haq, Penetration Testing for Android Smartphones, master's thesis, Chalmers University of Technology, Goteborg, SW, 2011.
[2] F. Alisherov and F. Sattarova, Methodology for Penetration Testing, International Journal of Grid and Distributed Computing, 2(2), 2009, 44-49.
[3] A. Johnson, K. Dempsey, R. Ross, S. Gupta and D. Bailey, Guide for Security-Focused Configuration Management of Information Systems, (SP800-42), Accessed on May, 2013.
[4] H. Sheikh, J. Cyril, and O. Tomas, An Analysis of the Robustness and Stability of the Network Stack in Symbian Based Smartphones, Vol. 10, 2009.
[5] N. Kumar and M.E. Ul Haq, Penetration Testing for Android Smartphones, master's thesis, Chalmers University of Technology, Goteborg, SW, 2011.
[6] Android operating system, (2013). Available from: <http://en.wikipedia.org/wiki/Android_(operating_system)>. [10 March 2013]
[7] K. Arto, Security Comparison of Mobile OSes. Available from: <http://www.tml.tkk.fi/Opinnot/Tik-110.501/2000/papers/kettula.pdf>. [11 March 2013]
[8] Android Version history, (2013). Available from: <http://en.wikipedia.org/wiki/Android_Version_history>. [15 March 2013]
[9] Android Version, (2013). Available from: <http://developer.Android.com/guide/basics/what-is-Android.html>. [15 March 2013]
[10] Android Version, (2013). Available from: <http://developer.Android.com/guide/basics/what-is-Android.html>. [15 March 2013]