Volume-12 ~ Issue-6
- Citation
- Abstract
- Reference
- Full PDF
Abstract: In mobile ad hoc networks (MANETs), the network topology changes frequently and unpredictably because nodes move arbitrarily. This leads to frequent path failures and route reconstructions, which increase the routing control overhead. Since the overhead of route discovery cannot be neglected, reducing it is imperative in the design of MANET routing protocols; indeed, one of the fundamental challenges of MANETs is designing dynamic routing protocols with good performance and low overhead. In route discovery, broadcasting is a fundamental and effective data-dissemination mechanism in which a mobile node blindly rebroadcasts the first route request packet it receives unless it has a route to the destination, and this causes the broadcast storm problem. This paper focuses on a probabilistic rebroadcast protocol based on neighbor coverage to reduce the routing overhead in MANETs.
Keywords - Mobile Ad Hoc Networks, Neighbor Coverage, Network Connectivity, Probabilistic Rebroadcast, Routing Overhead, AODV.
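As a toy illustration of the neighbor-coverage idea (not code from the paper; the function names and the simple additional-coverage ratio are our own assumptions), a node could set its rebroadcast probability to the fraction of its neighbors that the sender's broadcast has not already reached:

```python
import random

def rebroadcast_probability(my_neighbors, sender_neighbors):
    """Additional-coverage ratio: the fraction of this node's neighbors
    NOT already covered by the sender's broadcast."""
    if not my_neighbors:
        return 0.0
    uncovered = set(my_neighbors) - set(sender_neighbors)
    return len(uncovered) / len(my_neighbors)

def should_rebroadcast(my_neighbors, sender_neighbors, rng=random.random):
    """Probabilistic rebroadcast decision: more uncovered neighbors,
    higher chance of forwarding the route request."""
    p = rebroadcast_probability(my_neighbors, sender_neighbors)
    return rng() < p
```

A node whose neighbors are all covered by the sender stays silent, which is exactly how the broadcast storm is damped.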
[1] C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing," RFC 3561, 2003.
[2] D. Johnson, Y. Hu, and D. Maltz, "The Dynamic Source Routing Protocol for Mobile Ad hoc Networks (DSR) for IPv4," RFC 4728, 2007.
[3] H. AlAamri, M. Abolhasan, and T. Wysocki, "On Optimising Route Discovery in Absence of Previous Route Information in MANETs," Proc. of IEEE VTC 2009-Spring, pp. 1-5, 2009.
[4] X. Wu, H. R. Sadjadpour, and J. J. Garcia-Luna-Aceves, "Routing Overhead as A Function of Node Mobility: Modeling Framework and Implications on Proactive Routing," Proc. of IEEE MASS'07, pp. 1-9, 2007.
[5] S. Y. Ni, Y. C. Tseng, Y. S. Chen, and J. P. Sheu. "The Broadcast Storm Problem in a Mobile Ad hoc Network," Proc. of ACM/IEEE MobiCom'99, pp. 151-162, 1999.
[6] A. Mohammed, M. Ould-Khaoua, L. M. Mackenzie, C. Perkins, and J. D. Abdulai, "Probabilistic Counter-Based Route Discovery for Mobile Ad Hoc Networks," Proc. of WCMC'09, pp. 1335-1339, 2009.
[7] B. Williams and T. Camp, "Comparison of Broadcasting Techniques for Mobile Ad Hoc Networks," Proc. ACM MobiHoc'02, pp. 194-205, 2002.
[8] J. Kim, Q. Zhang, and D. P. Agrawal, "Probabilistic Broadcasting Based on Coverage Area and Neighbor Confirmation in Mobile Ad hoc Networks," Proc. of IEEE GLOBECOM'04, 2004.
[9] J. D. Abdulai, M. Ould-Khaoua, and L. M. Mackenzie, "Improving Probabilistic Route Discovery in Mobile Ad Hoc Networks," Proc. Of IEEE Conference on Local Computer Networks, pp. 739-746, 2007.
[10] Network Simulator - ns-2 http://www.isi.edu/nsnam/ns/.
Paper Type | : | Research Paper |
Title | : | Motion analysis in video surveillance using edge detection techniques |
Country | : | India |
Authors | : | Anupam Mukherjee, Debaditya Kundu |
DOI | : | 10.9790/0661-1261015
Abstract: Motion tracking is an important task in image-processing applications. Tracking moving objects and their interactions in a complex environment is difficult; this work explains a technique for tracking moving objects. Moving-object detection is accomplished by image capture, background subtraction and the Prewitt edge-detection operator. The main idea of our approach, the background-subtraction technique, is to subtract two consecutive frames directly to extract a difference image. The difference image marks the areas where a moving object was in frame N and where the object is in frame N+1, respectively. The Prewitt operator is the more suitable choice for moving-object analysis.
Keywords - Background subtraction, Canny edge detection operator, Edge detection, Motion Tracking, Moving object detection, Noise reduction, Prewitt edge detection operator.
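A minimal sketch of the pipeline described above: difference two consecutive frames, then apply the Prewitt operator to the difference image. The helper names and the fixed threshold are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# 3x3 Prewitt kernels (horizontal and vertical gradients).
PREWITT_X = np.array([[-1, 0, 1]] * 3)
PREWITT_Y = PREWITT_X.T

def convolve2d(img, kernel):
    """Minimal 'valid'-mode 2-D correlation (no SciPy dependency).
    Kernel orientation only affects edge polarity, which is fine here."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def motion_edges(frame_n, frame_n1, threshold=30):
    """Difference image between consecutive frames, then Prewitt
    gradient magnitude to outline the moving object."""
    diff = np.abs(frame_n1.astype(int) - frame_n.astype(int))
    gx = convolve2d(diff, PREWITT_X)
    gy = convolve2d(diff, PREWITT_Y)
    return np.hypot(gx, gy) > threshold
```

Static regions cancel in the difference image, so only the moving object's outline survives the gradient threshold.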
[1] Rafael C. Gonzalez, Richard E.Woods, Digital Image Processing, Prentice Hall
[2] J.Canny, "A Computational Approach to Edge Detection", IEEE Trans.1986.
[3] A. Roy, S. Shinde, and K.-D. Kang, International Journal of Image Processing (IJIP), Volume (2).
[4] A. Mukherjee, "Edge Detection Based Motion Tracking in Video Surveillance," Proc. International Conference on Signal and Image Processing, 2013. (Conference Proceedings)
[5] Merin Antony A and J. Anitha, "A Survey of Moving Object Segmentation Methods."
[6] M. Piccardi, "Background Subtraction Techniques: A Review," IEEE Conference on Systems, Man and Cybernetics.
Paper Type | : | Research Paper |
Title | : | Detecting Spam Tags Against Collaborative Unfair Through Trust Modelling |
Country | : | India |
Authors | : | N. Shravani, Dr. P. Govardhan |
DOI | : | 10.9790/0661-1261619
Abstract: In the past few years, sharing photos within social networks has become very popular. To make these huge collections easier to explore, images are usually tagged with representative keywords such as persons, events, objects, and locations. To speed up the time-consuming annotation process, tags can be propagated based on the similarity between image content and context. Daily and continuous communication implies the exchange of several types of content, including free text, image, audio and video data. Based on the established correspondences between two image sets and the reliability of the user, tags are propagated from the tagged to the untagged images. The user trust model reduces the risk of propagating wrong tags caused by spamming or faulty annotation. The effectiveness of the proposed method is demonstrated through a set of experiments on an image database containing various landmarks. Tagging in online social networks is very popular these days, as it facilitates search and retrieval of multimedia content; however, noisy and spam annotations often make this difficult.
Keywords – Annotation process, audio and video data, trust modeling, tagging, spam.
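A rough sketch of trust-weighted tag aggregation in the spirit of the abstract (the normalization by total trust and the acceptance threshold are our own simplifying assumptions, not the paper's model):

```python
def propagate_tags(annotations, user_trust, trust_threshold=0.5):
    """Aggregate tag votes weighted by each annotator's trust score and
    keep only tags whose weighted support clears the threshold.

    annotations: list of (tag, user) pairs proposed for an image.
    user_trust:  dict mapping user -> trust score in [0, 1]."""
    support = {}
    for tag, user in annotations:
        support[tag] = support.get(tag, 0.0) + user_trust.get(user, 0.0)
    total = sum(user_trust.values()) or 1.0  # avoid division by zero
    return {t for t, s in support.items() if s / total >= trust_threshold}
```

Tags proposed only by low-trust (possibly spamming) users never accumulate enough weighted support to be propagated.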
[1] Z. Xu, Y. Fu, J. Mao, and D. Su, "Towards the semantic Web: Collaborative tag suggestions," in Proc. ACM WWW, May 2006, pp. 1–8.
[2] J. Kleinberg, "Authoritative sources in a hyperlinked environment," JACM, vol. 46, no. 5, pp. 604–632, Sept. 1999.
[3] L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank citation ranking: Bringing order to the Web," technical report, Stanford University, 1998.
[4] M. Richardson, R. Agrawal, and P. Domingos, "Trust management for the semantic Web," in Proceedings of the Second International Semantic Web Conference, 2003.
[5] I. Ivanov, P. Vajda, J.-S. Lee, L. Goldmann, and T. Ebrahimi, "Geotag propagation in social networks based on user trust model," Multimedia Tools Applicat., pp. 1–23, July 2010.
[6] L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum, "reCAPTCHA: Human-based character recognition via Web security measures," Science, vol. 321, no. 5895, pp. 1465–1468, Aug. 2008.
[7] R. Petrusel and P. L. Stanciu, "Making recommendations for decision processes based on aggregated decision data models," in W. Abramowicz et al. (eds.), BIS 2012, LNBIP 117, pp. 272–283, Springer-Verlag Berlin Heidelberg, 2012.
Abstract: Computers have become indispensable in all domains, and the medical segment is no exception. The need for accuracy and speed has led to a tight collaboration between machines and human beings. Perhaps the future will allow a world where human intervention is not necessary, but for now the best approach in the medical field is to create semi-automatic applications that help doctors with diagnoses, with following and managing the patients' evolution, and with other medical activities. Our application is designed for automatic measurement of orthopedic parameters and allows human intervention in case the parameters have not been detected properly. The segment of the application is Hip Arthroplasty. Wavelet transforms and other multi-scale analysis functions have been used for compact signal and image representations in de-noising, compression and feature-detection problems for about twenty years. Numerous research works have proven that space-frequency and space-scale expansions with this family of analysis functions provide a very efficient framework for signal or image data. The wavelet transform itself offers great design flexibility: basis selection, spatial-frequency tiling, and various wavelet threshold strategies can be optimized for best adaptation to a processing application, the data characteristics and the feature of interest. Fast implementation of wavelet transforms using a filter-bank framework enables real-time processing. Instead of trying to replace standard image-processing techniques, wavelet transforms offer an efficient representation of the signal, finely tuned to its intrinsic properties. By combining such representations with simple processing techniques in the transform domain, multi-scale analysis can achieve remarkable performance and efficiency for many image-processing problems.
Multi-scale analysis has been found particularly successful for image de-noising and enhancement problems, given that a suitable separation of signal and noise can be achieved in the transform domain (i.e. after projection of an observation signal) based on their distinct localization and distribution in the spatial-frequency domain. With better correlation of significant features, wavelets have also proven very useful for detection and matching applications.
Keywords - Hip Arthroplasty, Canny Edge Detection, DICOM, Hough Transform, Radiographic Image Processing, De-noising, Segmentation
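To make the wavelet-shrinkage idea concrete, here is a minimal one-level 1-D Haar transform with soft thresholding of the detail coefficients. This is a generic textbook sketch, not the paper's processing pipeline; the function names are our own:

```python
import numpy as np

def haar_1level(x):
    """One level of the orthonormal 1-D Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def inverse_haar_1level(approx, detail):
    """Exact inverse of haar_1level."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

def soft_threshold(c, t):
    """Shrink coefficients toward zero: the classic wavelet-shrinkage rule."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(signal, t):
    """Transform, shrink the detail (noise-dominated) band, invert."""
    a, d = haar_1level(signal)
    return inverse_haar_1level(a, soft_threshold(d, t))
```

Noise spreads thinly over many small detail coefficients while signal features concentrate in a few large ones, which is why shrinking the small coefficients separates the two.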
[1] J. Daugman, "Image Analysis by Local 2-D Spectral Signatures," Journal of the Optical Society of America A, Vol. 2, p. 74, 1985.
[2] D. L. Donoho and I. M. Johnstone, "Adapting to Unknown Smoothness via Wavelet Shrinkage," Journal of the American Statistical Association, Vol. 90, No. 432, pp. 1200-1224, 1995.
[3] Bankman I, Handbook of Medical Image Processing and Analysis, Academic Press, 2000.
[4] Feng D D, Biomedical Information Technology, Elsevier, 2008.
[5] Chen Y, Ee X, Leow K W, Howe T S, Automatic Extraction of Femur Contours from Hip X-ray Images, 2000.
[6] Gonzales R C, Woods R E, Digital Image Processing, Prentice-Hall, 2002.
[7] Canny J F, A Computational Approach to Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 1986.
[8] Campilho A, Kamel M, Image Analysis and Recognition, Springer.
[9] Raj K Sinha, Hip Replacement.
[10] Kennon R, Hip and Knee Surgery: A Patient's Guide to Hip Replacement, Hip Resurfacing, Knee Replacement, and Knee Arthroscopy.
Paper Type | : | Research Paper |
Title | : | Adaptive Search Based On User Tags in Social Networking |
Country | : | India |
Authors | : | Ch. Priyanka, Dr. P. Govardhan |
DOI | : | 10.9790/0661-1263235
Abstract: With the popularity of the network and the development of multimedia retrieval and related technology, traditional information-retrieval techniques no longer meet users' demands. We empirically validate this approach on the social photo-sharing site Flickr, which allows users to annotate images with freely chosen tags and to search for images labeled with a certain tag. Existing content-based image-retrieval methodologies are not sufficient to retrieve the required images, so we propose a new methodology based on tagging. The results of our study show that it is possible to use social tags to improve accessibility. We use metadata associated with images tagged with an ambiguous query term to identify topics corresponding to different senses of the term, and then personalize the results of image search by displaying to the user only those images that are of his or her interest.
Keywords - Tagged image search, Topic model, Image retrieval, Tagging, Reranking.
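An illustrative sketch of the personalization step: re-rank candidate images by how well their tags match the user's interest profile. The profile representation and scoring are hypothetical simplifications, not the paper's topic model:

```python
def personalize(results, user_profile):
    """Re-rank search results for one user.

    results:      list of (image_id, tag_set) candidates for a query.
    user_profile: dict tag -> interest weight learned from the user's
                  own tagging history."""
    def score(image_tags):
        return sum(user_profile.get(t, 0.0) for t in image_tags)
    return sorted(results, key=lambda item: score(item[1]), reverse=True)
```

For an ambiguous query term, images tagged with the sense the user cares about float to the top.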
[1] B. Smyth, "A community-based approach to personalizing web search," Computer, vol. 40, no. 8, pp. 42–50, 2007.
[2] S. Xu, S. Bao, B. Fei, Z. Su, and Y. Yu, "Exploring folksonomy for personalized search," in SIGIR, 2008, pp. 155–162.
[3] D. Carmel, N. Zwerdling, I. Guy, S. Ofek-Koifman, N. Har'El, I. Ronen, E. Uziel, S. Yogev, and S. Chernov, "Personalized social search based on the user's social network," in CIKM, 2009, pp. 1227–1236.
[4] Y. Cai and Q. Li, "Personalized search by tag-based user profile and resource profile in collaborative tagging systems," in CIKM, 2010, pp. 969–978.
[5] D. Lu and Q. Li, "Personalized search on Flickr based on searcher's preference prediction," in WWW (Companion Volume), 2011, pp. 81–82.
[6] P. Heymann, G. Koutrika, and H. Garcia-Molina, "Can social bookmarking improve web search?" in WSDM, 2008, pp. 195–206.
[7] S. Bao, G.-R. Xue, X. Wu, Y. Yu, B. Fei, and Z. Su, "Optimizing web search using social annotations," in WWW, 2007, pp. 501–510.
[8] D. Zhou, J. Bian, S. Zheng, H. Zha, and C. L. Giles, "Exploring social annotations for information retrieval," in WWW, 2008, pp. 715–724.
Paper Type | : | Research Paper |
Title | : | A Novel Approach for Tracking with Implicit Video Shot Detection |
Country | : | India |
Authors | : | Kiran S., Amith Kamath B. |
DOI | : | 10.9790/0661-1263642
Abstract: Shot-change detection is an essential step in video content analysis, and the field of Video Shot Detection (VSD) is a well-explored area. In the past, numerous approaches have been designed to detect shot boundaries for temporal segmentation. Here, a Robust Pixel Based Method is used to detect shot changes in a video sequence. Tracking is a time-consuming process because of the large amount of data contained in video; using video shot detection, the computational cost can be reduced to a great extent by discarding frames that are of no interest to the tracking algorithm. In this paper we present a novel approach that combines video shot detection and object tracking with a particle filter, giving an efficient tracking algorithm with implicit shot detection.
Keywords – Bhattacharyya distance, Local adaptive threshold, Particle filter, Robust Pixel Difference method, Residual Re-sampling, Shot detection.
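A toy version of pixel-difference shot detection with an adaptive threshold. Mean + k·std over all frame differences is one simple adaptive choice; the paper's Robust Pixel Difference method and local adaptive threshold are more elaborate:

```python
import numpy as np

def shot_boundaries(frames, k=2.0):
    """Flag frame indices where the mean absolute pixel difference from
    the previous frame exceeds mean + k*std of all inter-frame
    differences (a simple adaptive threshold)."""
    diffs = np.array([
        np.mean(np.abs(frames[i].astype(int) - frames[i - 1].astype(int)))
        for i in range(1, len(frames))
    ])
    thresh = diffs.mean() + k * diffs.std()
    return [i + 1 for i, d in enumerate(diffs) if d > thresh]
```

Frames inside a detected shot can then be skipped by the (expensive) particle-filter tracker, which is the computational saving the abstract describes.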
[1] S. Dubuisson, "The Computation of the Bhattacharyya Distance Between Histograms Without Histograms," Laboratoire d'Informatique de Paris 6, Université Pierre et Marie Curie; presented at TIPR'97, Prague, 9-11 June 1997, and published in Kybernetika, 34, 4, 363-368, 1997.
[2] S. Hong, M. Bolić, and P. M. Djurić, "An Efficient Fixed-Point Implementation of Residual Resampling Scheme for High-Speed Particle Filters," IEEE Signal Processing Letters, Vol. 11, No. 5, May 2004.
[3] K. Nummiaro, E. Koller-Meier, and L. Van Gool, "An Adaptive Color-Based Particle Filter," Katholieke Universiteit Leuven, ESAT/PSI-VISICS, Heverlee, Belgium; Swiss Federal Institute of Technology (ETH), D-ITET/BIWI, Zurich, Switzerland.
[4] S. A. Angadi and V. Naik, "A Shot Boundary Detection Technique Based on Local Color Moments in YCbCr Color Space," in N. Meghanathan et al. (eds.), SIPM, FCST, ITCA, WSE, ACSIT, CS & IT, 2012.
[5] S. T. Dhagdi and P. R. Deshmukh, "Keyframe Based Video Summarization Using Automatic Threshold & Edge Matching Rate," International Journal of Scientific and Research Publications, July 2012.
[6] C. Cuevas and N. Garcia, "Real-Time Shot Detection Based on Motion Analysis and Multiple Low-Level Techniques," Grupo de Tratamiento de Imágenes, E.T.S. Ing. Telecomunicación, 2010.
Abstract: Human beings have real intelligence, and that intelligence triggers new thoughts in the mind. A human thinks about many things but may take a long time to solve a complex problem. If we build a system that works like human intelligence, the time taken to solve the complex problem may be much less; in this case we provide Artificial Intelligence (AI) to the system. An AI-based system has the ability to mimic the functions of the human brain. An intelligent agent works on behalf of a person, but what happens if we send the agent into a new environment? It may or may not work properly there. Intelligence that lets an agent work properly in a new environment without changing its set of rules is generally known as Universal Artificial Intelligence (UAI). This paper suggests an idea for building an intelligent agent that attempts to take the right decision in a new environment. We use a neuro-fuzzy system to provide more intelligence to the agent, so that it can take the right decision, with learning capability, in the new environment. An agent with more intelligence than other agents can be called a super-intelligent agent. This paper also shows a simulation of the intelligent agent avoiding obstacles in a new environment; the simulated agent shows good results compared to existing work.
Keywords - Universal Artificial Intelligence, Hidden Markov Model, Neuro-Fuzzy Systems
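A tiny fuzzy-rule sketch of the obstacle-avoidance idea, with two hand-picked membership functions and rules. This is purely illustrative; the paper's neuro-fuzzy controller learns its memberships rather than hard-coding them:

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_distance):
    """Toy Mamdani-style rule base:
         IF distance is NEAR THEN turn sharply (output 1.0)
         IF distance is FAR  THEN go straight  (output 0.0)
       defuzzified as a weighted average of the rule outputs."""
    near = trimf(obstacle_distance, -1.0, 0.0, 2.0)
    far = trimf(obstacle_distance, 1.0, 3.0, 5.0)
    if near + far == 0:
        return 0.0  # nothing fires: keep going straight
    return (near * 1.0 + far * 0.0) / (near + far)
```

Between the two memberships the output blends smoothly, which is what makes fuzzy control graceful compared with a hard distance threshold; a neuro-fuzzy system would additionally tune the membership parameters from experience.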
[1] Juan José Serrano, Arnulfo Alanis Garza, Rafael Ors Carot, José Mario García-Valdez, Hector Arias, Karim Ramirez, and Jose Soria, "Monitoring and Diagnostics with Intelligent Agents using Fuzzy Logic," advance online publication, 15 August 2007.
[2] Serban Gabriela, "A Reinforcement Learning Intelligent Agent", Studia Univ. Babes–Bolyai, Informatica, 2001.
[3] Raymond J. Hickey, Christopher J. Hanna, Michaela M. Black and Darryl K. Charles, "Modular Reinforcement Learning Architectures for Artificially Intelligent Agents in Complex Game Environments", IEEE Conference on Computational Intelligence and Games, 2010.
[4] Eleni Mangina, "Intelligent Agent-Based Monitoring Platform for Applications in Engineering", International Journal of Computer Science and Applications, 2005.
[5] Waleed H. Abdulla, Kevin I-Kai Wang, and Zoran Salcic, "Multi-agent Software Control System with Hybrid Intelligence for Ubiquitous Intelligent Environments", Springer-Verlag Berlin Heidelberg, 2007.
[6] F. Naghdy & X. Zhang , "Human Motion Recognition through Fuzzy Hidden Markov Model", University of Wollongong, Research Online.
[7] Kowalski Robert, "The Logical Way to Be Artificially Intelligent", Imperial College London.
[8] L. Moreno, G. N. Marichal, L. Acosta, J. A. Méndez, J. J. Rodrigo, and M. Sigut, "Obstacle avoidance for a mobile robot: A neuro-fuzzy approach," Fuzzy Sets and Systems, 2001, Elsevier.
[9] Zhen Liu , German Florez-Larrahondo, Yoginder S. Dandass, Rayford Vaughn and Susan M. Bridges, "Integrating Intelligent Anomaly Detection Agents into Distributed Monitoring Systems", Journal of Information Assurance and Security 1 (2006).
[10] B. Browning, B. Argall, and M. Veloso, "Learning by demonstration with critique of a human teacher," in Proceedings of the Second ACM/IEEE International Conference on Human Robot Interaction, 2007.
Abstract: Presently, users face many complicated, complex task-oriented goals on the search engine, such as managing finances, making travel arrangements, or other planning and purchases. To handle such tasks, users usually break them down into a few codependent steps and issue multiple queries, repeatedly and over a long period of time; search engines keep track of these queries and clicks while the user searches online. In this paper we study the problem of organizing a user's historical queries into groups in an automatic and dynamic manner. Automatically identifying query groups is helpful for a number of different search-engine components and applications, such as result ranking, query suggestions, and query alterations. We also propose security for the related query groups: whether users work alone or within an organization, this protects the users' data and information in the search engine or database.
Keywords - User history, search history, query clustering, query reformulation, click graph, task recognition, security.
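A naive sketch of grouping historical queries by similarity. Jaccard overlap of query words and a greedy single pass are our own simplifications; the paper's approach also uses clicks and the query reformulation graph:

```python
def jaccard(a, b):
    """Word-set Jaccard similarity between two query strings."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def group_queries(history, threshold=0.3):
    """Greedy single-pass grouping: attach each query to the first
    existing group containing a sufficiently similar query, else
    start a new group."""
    groups = []
    for q in history:
        for g in groups:
            if any(jaccard(q, member) >= threshold for member in g):
                g.append(q)
                break
        else:
            groups.append([q])
    return groups
```

Queries from the same task (e.g. successive reformulations) share words and land in one group, while unrelated tasks stay separate.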
[1] H. Hwang, H. W. Lauw, L. Getoor, and A. Ntoulas, "Organizing User Search Histories," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 5, 2012.
[2] J. Teevan, E. Adar, R. Jones, and M. A. S. Potts, "Information Re-Retrieval: Repeat Queries in Yahoo's Logs," Proc. 30th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval.
[3] J.-R. Wen, J.-Y. Nie, and H.-J. Zhang, "Query Clustering Using User Logs," ACM Trans. on Information Systems.
[4] R. Baeza-Yates and A. Tiberi, "Extracting Semantic Relations from Query Logs," Proc. 13th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining (KDD), 2007.
[5] J. Han and M. Kamber," Data Mining: Concepts and Techniques". Morgan Kaufmann, 2000.
[6] W. Barbakh and C. Fyfe, "Online Clustering Algorithms," Int'l J. Neural Systems, vol. 18, no. 3, pp. 185-194, 2008.
[7] M. Berry and M. Browne, eds., Data Mining Concepts, World Scientific Publishing Company, 2006.
Abstract: One of the most difficult tasks in the whole KDD process is choosing the right data mining technique, as commercial software tools provide ever more possibilities and the decision requires ever more methodological expertise. Indeed, there are many data mining techniques available to an environmental scientist wishing to discover a model from her/his data. This diversity can trouble the scientist, who often has no clear idea of the available methods and tends to have doubts about the most suitable method for a concrete domain problem. Within the data mining literature there is no common terminology; a classification of the data mining methods would greatly simplify the understanding of the whole space of available methods. In this work, a classification of the most common data mining methods is presented as a conceptual map that eases the selection process. An intelligent data mining assistant is also presented, oriented to model/algorithm selection support: it suggests to the user the most suitable data mining techniques for a given problem.
Keywords: Knowledge Discovery from Databases, Data Mining, Intelligent Decision Support Systems, Case-Based Reasoning.
[1] Gibert K, Rodríguez-Silva G, Rodríguez-Roda I, 2010: Knowledge Discovery with Clustering based on rules by States: A water treatment application. Environmental Modelling & Software 25:712-723
[2] Gibert K, J. Spate, M. Sànchez-Marrè, I. Athanasiadis, J. Comas (2008b): Data Mining for Environmental Systems. In Environmental Modeling, Software and Decision Support. pp 205-228.
[3] Pérez-Bonilla A, K. Gibert 2007: Automatic generation of conceptual interpretation of clustering. In Progress in Pattern Recognition, Image analysis and Applications. LNCS-4756:653-663. Springer
[4] Kdnuggets (2006): http://www.kdnuggets.com/polls/2006/data_mining_methods.htm. Data Mining Methods (Apr 2006)
[5] Spate J, K. Gibert, M. Sànchez-Marrè, E. Frank, J. Comas, I. Athanasiadis, R. Letcher 2006. Data Mining as a tool for environmental scientist. In process 1st IEMSs Workshop DM-TEST 2006.
Paper Type | : | Research Paper |
Title | : | Securing Image Steganography Based on Visual Cryptography and Integer Wavelet Transform |
Country | : | Iraq |
Authors | : | Yasir Ahmed Hamza |
DOI | : | 10.9790/0661-1266065
Abstract: The increased use of internet communication has given rise to the field of image steganography and made it necessary to secure digital content. Current image steganography techniques lack novelty and rely on traditional cryptography to protect the secret image that is embedded in the cover image. This research paper proposes and implements a new method to secure the embedded secret image using visual cryptography. A two-level integer wavelet transform is applied to the cover image to obtain the coefficients used later during the secret-image embedding process. The experimental results indicate high invisibility for the secret image and the ability to embed more than one secret image within the cover image. Also, the use of visual cryptography eliminates the permutation step that would otherwise be required for the secret image.
Keywords: Cryptography, Image Steganography, Integer wavelet transform, Stego-image, Visual Cryptography
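To illustrate why an integer (lifting) wavelet transform suits steganography, here is the classic 1-D S-transform, which maps integers to integers and inverts exactly, plus LSB embedding into a detail coefficient. This is a generic one-level sketch, not the paper's two-level scheme, and the names are our own:

```python
def iwt_1d(x):
    """One level of the integer (lifting) Haar transform: maps ints to
    ints and is exactly invertible, as lossless embedding requires."""
    approx = [(x[2 * i] + x[2 * i + 1]) // 2 for i in range(len(x) // 2)]
    detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
    return approx, detail

def iiwt_1d(approx, detail):
    """Exact inverse of iwt_1d."""
    out = []
    for l, h in zip(approx, detail):
        a = l + (h + 1) // 2
        out += [a, a - h]
    return out

def embed_bit(detail, i, bit):
    """Hide one secret bit in the LSB of a detail coefficient."""
    detail[i] = (detail[i] & ~1) | bit
    return detail
```

Because the transform is integer-exact, the embedded bit survives the inverse transform and re-transform round trip, whereas a floating-point wavelet would lose it to rounding.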
[1] T. Morkel, J. H. P. Eloff, and M. S. Olivier, "An overview of image steganography," Proceedings of the Fifth Annual Information Security South Africa Conference (ISSA2005), Sandton, South Africa, 2005.
[2] N. Hamid, A. Yahya, R. Ahmed, and O. M. Al-Qershi, "Image steganography techniques: an overview," International Journal of Computer Science and Security (IJCSS), Vol. 6, Issue 3, 2012.
[3] S. Hemalatha, Dinesh A. U., A. Renuka, and R. K. Pariya, "A secure and high capacity image steganography technique," Signal and Image Processing: An International Journal (SIPIJ), Vol. 4, No. 1, 2013.
[4] M. F. Tolba, M. A. Ghonemy, I. A. Taha, and A. S. Khalifa, "Using integer wavelet transforms in colored image-steganography," International Journal of Intelligent Computing and Information Science, Vol. 4, No. 2, 2004.
[5] S. Kumar and S. K. Muttoo, "A comparative study of image steganography in wavelet domain," International Journal of Computer Science and Mobile Computing, Vol. 2, Issue 2, 2013, pp. 91-101.
[6] M. Naor and A. Shamir, "Visual cryptography," Advances in Cryptology: Eurocrypt '94, Springer-Verlag, Berlin, 1994, pp. 1-12.
[7] S. Chandramathi, K. R. Ramesh, and S. Harish, "An overview of visual cryptography," International Journal of Computational Intelligence Techniques, Vol. 1, Issue 1, 2010, pp. 32-37.
[8] S.K. Jinna, and L.Ganesan, "Reversible image data hiding using lifting wavelet transform and histogram shifting", International Journal of Computer Science and Information Security, Vol. 7, No. 3, 2010.
[9] P. Ganesan, and R. Bhavani, "A high secure image steganography using dual wavelet and blending model", Journal of Computer Science, Vol. 9, Issue 3, 2013, pp. 277-284.
Abstract: In data mining, difficulties are encountered when applying machine learning techniques to real-world data, which frequently show skewness properties. Typical examples from industry where skewed data is an intrinsic problem are fraud detection in financial data, medical diagnosis of rare diseases, and finding network intrusions. This problem is also known as the class imbalance problem: the samples of one class may be far fewer than those of another class in the data set. Many techniques have been developed for handling class imbalance, and they basically divide into two types. The first designs a new algorithm that improves minority-class prediction; the second modifies the number of samples in the existing classes, which is also known as data pre-processing. Under-sampling is a very popular data pre-processing approach to the class imbalance problem: it is efficient because it uses only a subset of the majority class, but its drawback is that it removes many useful majority-class samples. To solve this problem we propose a multi-cluster-based majority under-sampling and random minority over-sampling approach. Compared to plain under-sampling, cluster-based random under-sampling can effectively avoid the loss of important information about the majority class.
Keywords: Skewed data, random under-sampling, class imbalance problem, clustering, imbalanced dataset.
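A minimal sketch of cluster-based majority under-sampling: draw samples from each majority-class cluster instead of sampling the class blindly, so every region of the majority class keeps some representatives. The clustering step itself is assumed already done, and the names and even-allocation rule are illustrative:

```python
import random

def cluster_undersample(majority_clusters, n_keep, seed=0):
    """Keep roughly n_keep / n_clusters samples from EACH majority-class
    cluster, so no region of the majority class is wiped out entirely.

    majority_clusters: list of lists, one list of samples per cluster."""
    rng = random.Random(seed)
    per_cluster = max(1, n_keep // len(majority_clusters))
    kept = []
    for cluster in majority_clusters:
        k = min(per_cluster, len(cluster))
        kept.extend(rng.sample(cluster, k))
    return kept
```

Plain random under-sampling would draw mostly from the largest cluster; sampling per cluster is what preserves the "important information" the abstract refers to.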
[1] S. Wang and X. Yao, "Multiclass Imbalance Problems: Analysis and Potential Solutions," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 42, No. 4, August 2012.
[2] T. Maciejewski and J. Stefanowski, "Local Neighbourhood Extension of SMOTE for Mining Imbalanced Data," IEEE, 2011.
[3] G. Wu and E. Y. Chang, "KBA: Kernel Boundary Alignment Considering Imbalanced Data Distribution," IEEE Transactions on Knowledge and Data Engineering, Vol. 17, No. 6, June 2005.
[4] X. Guo, Y. Yin, C. Dong, G. Yang, and G. Zhou, "On the Class Imbalance Problem," Fourth International Conference on Natural Computation.
[5] J. Li, H. Li, and J.-L. Yu, "Application of Random-SMOTE on Imbalanced Data Mining," Fourth International Conference on Business Intelligence and Financial Engineering, 2011.
[6] Y. Tang, Y.-Q. Zhang, N. V. Chawla, and S. Krasser, "SVMs Modeling for Highly Imbalanced Classification," Journal of LaTeX Class Files, vol. 1, no. 11, November 2002.
[7] M. Galar, A. Fernández, E. Barrenechea, H. Bustince, and F. Herrera, "A Review on Ensembles for the Class Imbalance Problem: Bagging-, Boosting-, and Hybrid-Based Approaches," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, Vol. 42, No. 4, July 2012.
[8] C. Seiffert and T. M. Khoshgoftaar, "Mining Data with Rare Events: A Case Study," 19th IEEE International Conference on Tools with Artificial Intelligence, 2007.
[9] C. Seiffert, T. M. Khoshgoftaar, J. Van Hulse, and A. Napolitano, "RUSBoost: A Hybrid Approach to Alleviating Class Imbalance," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 40, No. 1, January 2010.
[10] T. Maciejewski and J. Stefanowski, "Local Neighborhood Extension of SMOTE for Mining Imbalanced Data," IEEE, 2011.
Paper Type | : | Research Paper |
Title | : | Concepts and Derivatives of Web Services |
Country | : | Malaysia |
Authors | : | Atieh Khanjani , Wan Nurhayati Wan Ab. Rahman |
DOI | : | 10.9790/0661-1267478
Abstract: Since web services are growing rapidly in the cloud, service consumers and providers are looking for a means to find better services that satisfy both parties. From both the user's and the developer's perspectives, discovering the functional and non-functional characteristics of a web service is essential. To overcome these issues, much research has been published on improving the usage of web services to satisfy customers. This paper reviews the web-services literature with respect to quality of service (QoS), or non-functional properties, to acquire a better understanding of the concepts and issues related to QoS-based web-service selection and discovery.
Keywords - web service, QoS, standards, UDDI, WSDL, SOAP, non-functional properties
[1] Kritikos K and Plexousakis D, Requirements for QoS-Based Web Service Description and Discovery. IEEE Transactions on Services Computing, 2, 2009, 320-37.
[2] Menasce DA, QoS issues in Web services. IEEE Internet Computing, 6, 2002, 72-5.
[3] Menasce DA, Composing Web Services: A QoS View. IEEE Internet Computing, 8, 2004, 88-90.
[4] Zheng Z, Zhang Y, and Lyu MR, Distributed QoS Evaluation for Real-World Web Services. In IEEE International Conference on Web Services (ICWS), 2010, pp. 83-90.
[5] Levitt J, From EDI To XML And UDDI: A Brief History Of Web Services, 2001.
[6] Ran S, A model for web services discovery with QoS. SIGecom Exch., 4, 2003, 1-10.
[7] Yan-ping Chen, Qin-xue Jin, and Chuang Wang, Study on QoS Driven Web Services Composition. Springer-Verlag Berlin Heidelberg, 3841, 2006, 702-7.
[8] Yin B, Yang H, Fu P, Gu L, and Liu M, A framework and QoS based web services discovery. In IEEE International Conference on Software Engineering and Service Sciences (ICSESS), 2010, pp. 755-8.
[9] W. N. Wan Ab Rahman and F. Meziane, Challenges to Describe QoS Requirements for Web Services Quality Prediction to Support Web Services Interoperability in Electronic Commerce. Communications of the IBIMA, 4, 2008, 50-8.
[10] Yu WD, Radhakrishna RB, Pingali S, and Kolluri V, Modeling the Measurements of QoS Requirements in Web Service Systems. Simulation 83, 2007, 75-91.
- Citation
- Abstract
- Reference
- Full PDF
Paper Type | : | Research Paper |
Title | : | Improvement of Image Deblurring Through Different Methods |
Country | : | India |
Authors | : | A. Anusha , Dr. P. Govardhan |
DOI | : | 10.9790/0661-1267982 |
Abstract: In this paper, we analyze the research on this topic both theoretically and experimentally through different methods: the deterministic filter, Bayesian estimation, the conjunctive deblurring algorithm (CODA), and the alpha tonal correction method, which applies the deterministic filter and Bayesian estimation in a conjunctive manner. We point out the weaknesses of the deterministic filter and unify the limitations latent in two kinds of Bayesian estimators. The proposed alpha tonal correction method gives better performance than the deterministic filter and sharp image estimation, and can handle quite large blurs beyond the reach of the deterministic filter and image estimation alone. Finally, we demonstrate that our method outperforms state-of-the-art methods by a large margin.
Key words - Blind image deconvolution, image sharpening, alpha tonal correction, and deterministic filter.
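To make the "deterministic filter" idea concrete, here is a minimal 1-D sketch of unsharp masking, a classic deterministic sharpening filter of the kind the abstract contrasts with Bayesian estimation. The test signal, kernel radius, and alpha value are assumptions for illustration, not the paper's actual algorithm:

```python
# Minimal 1-D sketch of deterministic sharpening (unsharp masking).
# The signal and parameters below are illustrative assumptions.

def box_blur(signal, radius=1):
    """Simple moving-average blur with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, alpha=1.0, radius=1):
    """sharpened = signal + alpha * (signal - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + alpha * (s - b) for s, b in zip(signal, blurred)]

# A step edge softened by blurring, then re-sharpened:
edge = [0, 0, 0, 1, 1, 1]
soft = box_blur(edge)
sharp = unsharp_mask(soft, alpha=2.0)
```

The sharpened edge regains contrast (at the cost of slight overshoot near the step), which is exactly the trade-off that motivates combining deterministic filtering with statistical estimation for large blurs.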
[1]. D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag., vol. 13, no. 3, pp. 43–64, May 1996.
[2]. J. Cai, H. Ji, C. Liu, and Z. Shen, "Blind motion deblurring from a single image using sparse approximation," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2009, pp. 104–111.
[3]. N. Joshi, R. Szeliski, and D. Kriegman, "PSF estimation using sharp edge prediction," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2008.
[4]. G. Gilboa, N. Sochen, and Y. Y. Zeevi, "Forward-and-backward diffusion processes for adaptive image enhancement and denoising," IEEE Trans. Image Process., vol. 11, no. 7, pp. 689–703, Jul. 2002.
[5]. Y. Zhang, C. Wen, and Y. Zhang, "Estimation of motion parameters from blurred images," Pattern Recognition Letters, vol. 21, pp. 425–433, 2000.
[6]. R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, "Removing camera shake from a single photograph," ACM Trans. Graph., vol. 25, no. 3, pp. 787–794, Jul. 2006.
- Citation
- Abstract
- Reference
- Full PDF
Abstract: In this paper, we develop a heart disease prediction model that can assist medical professionals in predicting heart disease status based on the clinical data of patients. Firstly, we select 14 important clinical features, i.e., age, sex, chest pain type, trestbps, cholesterol, fasting blood sugar, resting ECG, max heart rate, exercise induced angina, old peak, slope, number of vessels colored, thal, and diagnosis of heart disease. Secondly, we develop a prediction model using the J48 decision tree for classifying heart disease based on these clinical features, comparing the unpruned, pruned, and pruned with reduced error pruning approaches. Finally, the accuracy of the pruned J48 decision tree with the reduced error pruning approach is better than that of the simple pruned and unpruned approaches. The results show that fasting blood sugar is the most important attribute, giving better classification than the other attributes, though it does not yield better accuracy.
Keywords—Data mining, Reduced Error Pruning, Gain Ratio and Decision Tree.
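The attribute ranking reported above can be illustrated with a toy sketch of information-gain-based attribute selection, the criterion family underlying J48/C4.5 (which refines it into gain ratio). The patient records and attribute names below are fabricated for illustration only and are not real clinical data:

```python
# Toy sketch of decision-tree attribute selection by information gain.
# Records and attribute values are fabricated illustrative data.
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def info_gain(records, attr, target="disease"):
    """Entropy reduction from splitting the records on `attr`."""
    base = entropy([r[target] for r in records])
    n = len(records)
    remainder = 0.0
    for v in set(r[attr] for r in records):
        subset = [r[target] for r in records if r[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

patients = [
    {"fbs_high": 1, "exang": 1, "disease": 1},
    {"fbs_high": 1, "exang": 0, "disease": 1},
    {"fbs_high": 1, "exang": 1, "disease": 1},
    {"fbs_high": 0, "exang": 1, "disease": 0},
    {"fbs_high": 0, "exang": 0, "disease": 0},
    {"fbs_high": 0, "exang": 0, "disease": 1},
]
gains = {a: info_gain(patients, a) for a in ("fbs_high", "exang")}
best = max(gains, key=gains.get)  # attribute chosen for the root split
```

In this contrived sample, `fbs_high` (fasting blood sugar) splits the classes more cleanly than `exang`, so it would be selected for the root, mirroring the paper's finding that an attribute can dominate the split ranking without guaranteeing the best overall accuracy.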
[1.] Wu R, Peters W, Morgan MW. The next generation clinical decision support: linking evidence to best practice. J Healthc Inf Manag, 2002; 16:50-5.
[2.] Thuraisingham BM. A Primer for Understanding and applying data mining. IT Professional 2000; 1:28-31.
[3.] Rajkumar A, Reena GS. Diagnosis of heart disease using data mining algorithm. Global Journal of Computer Science and Technology 2010; 10:38-43.
[4.] Anbarasi M, Anupriya E, Iyengar NCHSN. Enhanced prediction of heart Disease with feature subset selection using genetic algorithm. International Journal of Engineering Science and Technology 2010; 2:5370-76.
[5.] Palaniappan S, Awang R. Intelligent heart disease prediction system using data mining techniques. International Journal of Computer Science and Network Security 2008; 8:343-50.
[6.] J. Han, M. Kamber, Data Mining: Concepts and Techniques, 2nd Edition, Morgan Kaufmann, 2006.
[7.] T. Mitchell, Machine Learning, McGraw Hill, 1997.
[8.] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1993.
[9.] M. Last and O. Maimon, "A Compact and Accurate Model for Classification", IEEE Transactions on Knowledge and Data Engineering 2004; 16, 2: 203-215.
[10.] O. Maimon and M. Last, Knowledge Discovery and Data Mining – The InfoFuzzy Network (IFN) Methodology, Kluwer Academic Publishers, Massive Computing, Boston, December 2000.
- Citation
- Abstract
- Reference
- Full PDF
Abstract: Focusing on engineering computing and optimization tasks, this paper investigates secure outsourcing of widely applicable linear programming (LP) computations. In order to achieve practical efficiency, our mechanism design explicitly decomposes the LP computation outsourcing into public LP solvers running on the cloud and private LP parameters owned by the customer. The resulting flexibility allows us to explore an appropriate security/efficiency tradeoff via a higher-level abstraction of LP computations than the general circuit representation. In particular, by formulating the private data owned by the customer for the LP problem as a set of matrices and vectors, we are able to develop a set of efficient privacy-preserving problem transformation techniques, which allow customers to transform the original LP problem into some arbitrary one while protecting sensitive input/output information. To validate the computation result, we further explore the fundamental duality theorem of LP computation and derive the necessary and sufficient conditions that a correct result must satisfy. Such a result verification mechanism is extremely efficient and incurs close-to-zero additional cost on both the cloud server and customers. Extensive security analysis and experiment results show the immediate practicability of our mechanism design.
Keywords: cloud customer, cloud server, fully homomorphic encryption (FHE), linear programming
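The duality-based result verification described in the abstract can be sketched as follows: for an LP in standard form (minimize c^T x subject to Ax = b, x >= 0), the customer accepts a returned primal/dual pair (x, y) only if it is primal feasible, dual feasible, and has zero duality gap. The tiny problem instance and the candidate solutions below are illustrative assumptions, not the paper's actual transformation:

```python
# Sketch of duality-based LP result verification: accept a cloud-returned
# primal/dual pair without re-solving the LP. The instance is illustrative.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def verify_lp(A, b, c, x, y, tol=1e-9):
    """Accept (x, y) iff: A x = b and x >= 0 (primal feasibility),
    A^T y <= c (dual feasibility), and c^T x = b^T y (zero duality gap)."""
    primal_feasible = (all(abs(r - bi) <= tol for r, bi in zip(matvec(A, x), b))
                       and all(xi >= -tol for xi in x))
    At = list(zip(*A))  # transpose of A
    dual_feasible = all(dot(row, y) <= ci + tol for row, ci in zip(At, c))
    zero_gap = abs(dot(c, x) - dot(b, y)) <= tol  # strong duality
    return primal_feasible and dual_feasible and zero_gap

# minimize x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  optimum x = (1, 0)
A = [[1, 1]]
b = [1]
c = [1, 2]
ok = verify_lp(A, b, c, x=[1, 0], y=[1])   # correct result: accepted
bad = verify_lp(A, b, c, x=[0, 1], y=[2])  # suboptimal result: rejected
```

The check costs only a few matrix-vector products, which is why the abstract can claim close-to-zero verification overhead compared with solving the LP itself.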
[1] P. Mell and T. Grance, "Draft NIST working definition of cloud computing," referenced on Jan. 23, 2010, online at http://csrc.nist.gov/groups/SNS/cloud-computing/index.html, 2010.
[2] Cloud Security Alliance, "Security guidance for critical areas of focus in cloud computing," 2009, online at http://www.cloudsecurityalliance.org.
[3] C. Gentry, "Computing arbitrary functions of encrypted data," Commun. ACM, vol. 53, no. 3, pp. 97–105, 2010.
[4] Sun Microsystems, Inc., "Building customer trust in cloud computing with transparent security," 2009, online at https://www.sun.com/offers/details/sun_transparency.xml.
[5] M. J. Atallah, K. N. Pantazopoulos, J. R. Rice, and E. H. Spafford, "Secure outsourcing of scientific computations," Advances in Computers, vol. 54, pp. 216–272, 2001.
- Citation
- Abstract
- Reference
- Full PDF
Abstract: The concept of green computing is to improve environmental conditions; its main aim is to reduce the use of toxic materials. We systematically analyze energy consumption based on types of services and obtain the conditions that enable green cloud computing to reduce overall energy consumption. Today, a major issue is to build equipment that achieves energy efficiency, minimizes e-waste, and uses non-toxic chemicals/materials in the manufacture of electronic equipment. Green computing can be applied across computing fields, from CPU servers to other peripheral and mobile devices. By using green computing we can reduce resource consumption and the disposal of electronic waste (e-waste). The number of computers and other electronic devices is increasing day by day, so the amount of electricity they consume is also increasing, and with it the percentage of CO2 in the atmosphere. The other toxic materials used in the computer/electronics industry are likewise harmful to the environment. In this paper, we comprehensively survey the concepts and architecture of green computing, as well as its heat and energy consumption issues. The pros and cons of each green computing strategy are discussed, along with its environmentally friendly approach. Green computing can help provide a safe, secure, and healthy environment all over the world. This paper also reviews initiatives currently under way in the computer/electronics industry and new ways to save the vast amounts of energy that are wasted on a very large scale.
Keywords: Green Computing, toxic material, e-waste, e-equipments, peripheral devices
[1] Sanghita Roy and Manigrib Bag, "Green Computing: New Horizon of Energy Efficiency and E-waste Minimization, World Perspective vis-a-vis Indian Scenario," 65/25 Jyotish Roy Road, New Alipore, Kolkata 700053, India.
[2] Parichay Chakraborty, Debnath Bhattacharyya, Sattarova Nargiza Y., and Sovan Bedajna, "Green Computing: Practice of Efficient and Eco-Friendly Computing Resources," International Journal of Grid and Distributed Computing, vol. 2, no. 3, September 2009.
[3] Priya Rana, Department of Information Technology, RKGIT, Ghaziabad, "Green Computing Saves Green," International Journal of Advanced Computer and Mathematical Sciences, vol. 1, issue, Dec. 2010, pp. 45-51.
[4] Joseph Willium and Lewis Curtis, "Green: The New Computing Coat of Arms," IT Pro, January/February 2008, published by the IEEE Computer Society.
[5] Mujaba Talebi and Thomas Way, "Method, Metrics and Motivation for a Green Computer Science Program," Applied Computing Technology Laboratory, Department of Computing Science, Villanova University, Villanova, PA 19085. http://today.slac.stanford.edu/feature/hydrogen2.asp
[7] "Green Computing," The Future of Things, http://www.thefutureoffthings.com, 19 Nov. 2007.
[8] Feng-Seng Chu and Kwang-Cheng Chen, "Toward Green Cloud Computing," Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, ICUIMC '11, February 21-23, 2011. http://www.hostupon.com/network.html
[10] "Software or Hardware: The Future of Green Enterprise Computing," page 185, 14 pages.