Volume-13 ~ Issue-1
Paper Type | : | Research Paper |
Title | : | Cloud Computing for Hand-held Devices: Enhancing Smartphone Viability with Computation Offload |
Country | : | India |
Authors | : | Mohd. Abdul Salam |
DOI | : | 10.9790/0661-1310106 |
Abstract: Cloud computing is one of the modern day's wonders. It is not a product but a service, which provides shared resources, software, and information to computers and other devices such as smartphones, as a utility over a network, mainly the Internet [1]. Resources such as memory, storage space, and processors are not available at the user's end explicitly; service providers own these resources and users access them via the Internet. Cloud computing comes with many advantages for business, such as lower operating cost, low capital investment, shorter startup time for new services, and lower maintenance cost. It has been a boon for shifting computing from desktops to the cloud, and the new paradigm should now be cloud computing for mobile users. The main limitations for mobile cloud computing are the restricted availability of energy and wireless bandwidth. Mobile Cloud Computing combines cloud computing and mobile resources to overcome obstacles related to performance (such as battery life and bandwidth), environment (heterogeneity, scalability, and availability), and security (reliability and privacy). This paper discusses how cloud computing may provide energy savings to mobile users and hence increase the battery life of the mobile device.
Keywords: Computation Offloading, Mobile Cloud Computing, Cloud Computing.
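Where the abstract argues that offloading can save energy, the trade-off is usually framed as local compute energy versus radio transmission energy. Below is a minimal sketch of that decision rule, assuming illustrative power and bandwidth figures that are not taken from the paper:

```python
# Hypothetical offload decision: offloading pays off when the energy to
# compute locally exceeds the energy to ship the job's input over the radio.

def should_offload(cycles, bytes_to_send, p_compute_w=0.9, cpu_hz=1e9,
                   p_radio_w=1.3, bandwidth_bps=1e6):
    """Return True if remote execution is estimated to save battery energy.

    cycles        -- CPU cycles the job would need on the phone
    bytes_to_send -- input data that must be uploaded to the cloud
    All power/speed figures are illustrative assumptions.
    """
    e_local = p_compute_w * (cycles / cpu_hz)                    # joules on the phone
    e_offload = p_radio_w * (bytes_to_send * 8 / bandwidth_bps)  # joules to transmit
    return e_offload < e_local

# A compute-heavy job with a small input favours the cloud:
print(should_offload(cycles=5e9, bytes_to_send=50_000))      # True
# A data-heavy but cheap job should stay on the device:
print(should_offload(cycles=1e7, bytes_to_send=5_000_000))   # False
```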
[1] "The NIST Definition of Cloud Computing". National Institute of Science and Technology 24 July 2011.
[2] Hoang T. Dinh, Chonho Lee, Dusit Niyato, and Ping Wang: A Survey of Mobile Cloud Computing: Architecture, Applications, and Approaches, pp. 1-25
[3] L. Liu, R. Moulic, and D. Shea. January 2011, Cloud Service Portal for Mobile Device Management, pp. 474.
[4] Alexey Rudenko, Peter Reiher, Gerald J. Popek and Geoffrey H. Kuenning: Saving Portable Computer Battery Power through Remote Process Execution, Vol. 2, pp. 19-20.
[5] Eduardo Cuervo, Aruna Balasubramanian, Dae-ki Cho, Alec Wolman, Stefan Saroiu, Ranveer Chandra, Paramvir Bahl: June 2010, MAUI: Making Smartphones Last Longer with Code Offload, pp. 49-50.
[6] A. Rudenko, P. Reiher, G. J. Popek, and G. H. Kuenning: Saving portable computer battery power through remote process execution, vol. 2, no. 1, January 1998.
Paper Type | : | Research Paper |
Title | : | An effective citation metadata extraction process based on BibPro parser |
Country | : | India |
Authors | : | G. Guru Brahmam, A. Bhanu Prasad |
DOI | : | 10.9790/0661-1311216 |
Abstract: There is a dramatic increase in academic publications, and these publications are integrated into digital libraries by making use of citation strings. Any author can publish in journals and conferences using his or her own citation style; there is no single format for the citation styles appearing in digital libraries. Because of this, it is difficult for authors and researchers to perform field-based searching on digital libraries. It is therefore an interesting problem to extract the components of a citation string that may be formatted in any one of thousands of different citation styles. The proposed citation parser, named BibPro, extracts components of citation strings more accurately than existing systems and achieves reasonable performance.
Keywords: Data integration, digital libraries, information extraction, sequence alignment.
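Since the keywords point to sequence alignment, the parsing idea can be illustrated by mapping a citation's tokens to coarse classes and aligning that class sequence against style templates, keeping the best-scoring one. This is only a toy Needleman-Wunsch sketch; BibPro's actual encoding and templates are richer than what is assumed here:

```python
# Toy template matching for citation parsing: map tokens to coarse classes,
# then globally align the class sequence against each candidate style template.

def token_class(tok):
    core = tok.rstrip('.,;')
    if core.isdigit():
        return 'NUM'          # years, volume numbers, pages
    if core[:1].isupper():
        return 'CAP'          # names, title-case words
    return 'WORD'

def align_score(a, b, match=2, mismatch=-1, gap=-1):
    """Needleman-Wunsch similarity between two class sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = gap * i
    for j in range(1, n + 1):
        dp[0][j] = gap * j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[m][n]

templates = {  # hypothetical style templates, not BibPro's real ones
    "author-title-year": ['CAP', 'CAP', 'CAP', 'WORD', 'WORD', 'NUM'],
    "title-author-year": ['WORD', 'WORD', 'CAP', 'CAP', 'NUM'],
}
citation = "Lee D. Are your citations clean 2007".split()
classes = [token_class(t) for t in citation]
best = max(templates, key=lambda k: align_score(classes, templates[k]))
print(best)   # the template whose field order best explains the token stream
```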
[1] D. Lee, J. Kang, P. Mitra, C.L. Giles, and B.-W. On, "Are Your Citations Clean?," Comm. ACM, vol. 50, pp. 33-38, 2007.
[2] M. Cristo, P. Calado, M.A. Goncalves, E.S. de Moura, B. Ribeiro-Neto, and N. Ziviani, "Link-Based Similarity Measures for the Classification of Web Documents," J. Am. Soc. for Information Science and Technology, vol. 57, pp. 208-221, 2006.
[3] T. Couto, M. Cristo, M.A. Goncalves, P. Calado, N. Ziviani, E. Moura, and B. Ribeiro-Neto, "A Comparative Study of Citations and Links in Document Classification," Proc. Sixth ACM/IEEE-CS Joint Conf. Digital Libraries, 2006.
[4] M.A. Goncalves, B.L. Moreira, E.A. Fox, and L.T. Watson, "'What Is a Good Digital Library?' - A Quality Model for Digital Libraries," Information Processing and Management, vol. 43, pp. 1416-1437, 2007.
[5] S. Brin and L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," Proc. Seventh Int'l Conf. World Wide Web 7, 1998.
[6] A.H.F. Laender, B.A. Ribeiro-Neto, A.S. da Silva, and J.S. Teixeira, "A Brief Survey of Web Data Extraction Tools," SIGMOD Record, vol. 31, pp. 84-93, 2002.
[7] C.L. Giles, K. Bollacker, and S. Lawrence, "CiteSeer: An Automatic Citation Indexing System," DL '98: Proc. Third ACM Conf. Digital Libraries, pp. 89-98, 1998.
[8] K.D. Bollacker, S. Lawrence, and C.L. Giles, "CiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications," Proc. Second Int'l Conf. Autonomous Agents, 1998.
[9] S. Lawrence, C.L. Giles, and K.D. Bollacker, "Autonomous Citation Matching," Proc. Third Ann. Conf. Autonomous Agents, 1999.
[10] S. Lawrence, C.L. Giles, and K.D. Bollacker, "Digital Libraries and Autonomous Citation Indexing," Computer, vol. 32, no. 6, pp. 67- 71, June 1999.
Abstract: Retrieving huge volumes of data from a server requires a high-performance system with versatile capability, and that performance differs from system to system. Fundamentally, there are many mechanisms that do the job of storing and retrieving data from a server, and one such mechanism is Ajax. In this paper, we have worked on server performance measures using the AJAX mechanism. Though the performance of Ajax is low with plain push and pull server functions, this has been overcome using stateless push and pull operations. The results shown in this study provide a realistic approach for an efficient server mechanism.
Keywords: PUSH PULL architecture, AJAX framework, Website scalability, Architecture model, Server State notification
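The push/pull trade-off the abstract measures can be made concrete with a small simulation: polling costs requests and adds staleness, while push sends one message per event with no delay. This is a toy model under assumed event rates, not the paper's experimental setup:

```python
# Toy comparison of pull vs push for one client over a simulated hour.
import random

random.seed(1)
updates = sorted(random.uniform(0, 3600) for _ in range(30))  # server-side events

def pull_cost(interval):
    """Client polls every `interval` seconds; returns (requests, mean staleness)."""
    requests = int(3600 / interval)
    delays = [interval - (t % interval) for t in updates]  # wait until the next poll
    return requests, sum(delays) / len(delays)

def push_cost():
    """Server pushes each event as it happens: one message per update, no delay."""
    return len(updates), 0.0

print("pull every 60 s:", pull_cost(60))   # many requests, stale data
print("push:", push_cost())                # one message per event, fresh data
```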
[1] Yen-Cheng Chen, "Enabling Uniform Push Services for WAP and WWW", Proceedings of Workshop on the 21st Century Digital Life and Internet Technologies, 2001.
[2] Yang Zhao, "A Model of Computation with Push and Pull Processing", Research project, University of California at Berkeley, December 16, 2003.
[3] J. Saravanesh, Dr. E. Ramaraj, "Scalable Transaction Authorization Using Role Based Access Control for Time Based Content Access with Session Management", International Journal of Engineering Research and Development, eISSN: 2278-067X, pISSN: 2278-800X, www.ijerd.com, 2012.
[4] Engin Bozdag, Ali Mesbah, Arie van Deursen, "A Comparison of Push and Pull Techniques for AJAX", TUD-SERG-2007-016a.
[5] Engin Bozdag, Ali Mesbah, Arie van Deursen, "Performance Testing of Data Delivery Techniques for AJAX Applications", TUD-SERG-2008-009.
[6] V. Trecordi and G. Verticale. An architecture for effective push/pull web surfing. In 2000 IEEE International Conference on Communications, volume 2, pages 1159–1163, 2000.
[7] A. Mesbah and A. van Deursen. Migrating multi-page web applications to single-page Ajax interfaces. In CSMR '07: Proceedings of the 11th European Conference on Software Maintenance and Reengineering, pages 181–190. IEEE Computer Society, 2007.
[8] Mikko Pohja , "Server Push for Web Applications via Instant Messaging", Journal of Web Engineering, Vol. 9, No. 3 (2010) 227–242
Paper Type | : | Research Paper |
Title | : | Web Data Mining - A Research Area in Web Usage Mining |
Country | : | India |
Authors | : | V. S. Thiyagarajan, Dr. K. Venkatachalapathy |
DOI | : | 10.9790/0661-1312226 |
Abstract: Data mining technology has emerged as a means for identifying patterns and trends from large quantities of data. Data mining normally adopts a data integration method to generate a data warehouse, gathering all data into a central site, and then runs an algorithm against that data to extract useful module prediction and knowledge evaluation. Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The motive of mining is to find users' access models automatically and quickly from the vast Web log data, such as frequent access paths, frequent access page groups, and user clusterings. Through web usage mining, the server log, registration information, and other related information left by user access can be mined to obtain the user access mode, which provides a foundation for the decision making of organizations. This article provides a survey and analysis of current Web usage mining systems and technologies.
Keywords: Web log, Session model, path completion
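The session model named in the keywords is commonly built by splitting each user's log entries whenever the gap between consecutive requests exceeds a timeout. A minimal sketch, assuming the common 30-minute heuristic (a convention, not a value taken from this paper):

```python
# Timeout-based sessionization of a web server log.
from collections import defaultdict

TIMEOUT = 30 * 60  # seconds; a common heuristic assumption

def sessionize(log):
    """log: iterable of (user, timestamp, url), sorted by timestamp."""
    sessions = defaultdict(list)   # user -> list of sessions (each a list of urls)
    last_seen = {}
    for user, ts, url in log:
        if user not in last_seen or ts - last_seen[user] > TIMEOUT:
            sessions[user].append([])        # long gap: start a new session
        sessions[user][-1].append(url)
        last_seen[user] = ts
    return dict(sessions)

log = [("u1", 0, "/home"), ("u1", 120, "/products"),
       ("u1", 4000, "/home"), ("u2", 50, "/about")]
print(sessionize(log))
# {'u1': [['/home', '/products'], ['/home']], 'u2': [['/about']]}
```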
[1] Qingtian Han, Xiaoyan Gao, Wenguo Wu, "Study on Web Mining Algorithm Based on Usage Mining", Computer-Aided Industrial Design and Conceptual Design, 2008. CAID/CD 2008. 9th International Conference on 22-25 Nov. 2008
[2] Qingtian Han, Xiaoyan Gao, "Research of Distributed Algorithm Based on Usage Mining", Knowledge Discovery and Data Mining, 2009, WKDD 2009, Second International Workshop on 23-25 Jan. 2009
[3] Ranieri Baraglia and Fabrizio Silvestri, "An Online Recommender System for LargeWeb Sites", Web Intelligence, 2004. WI 2004. Proceedings. IEEE/WIC/ACM International Conference on 20-24 Sept. 2004
[4] Yan Li, Boqin Feng, Qinjiao Mao, "Research on Path Completion Technique in Web Usage Mining", Computer Science and Computational Technology, 2008. ISCSCT '08. International Symposium on Volume 1, 20-22 Dec. 2008
[5] Yi Dong, Huiying Zhang, Linnan Jiao, "Research on Application of User Navigation Pattern Mining Recommendation", Intelligent Control and Automation, 2006. WCICA 2006. The Sixth World Congress ,Volume 2
Paper Type | : | Research Paper |
Title | : | Correlation Preserving Indexing Based Text Clustering |
Country | : | India |
Authors | : | Venkata Gopala Rao S., A. Bhanu Prasad |
DOI | : | 10.9790/0661-1312730 |
Abstract: For document clustering, a method based on correlation preserving indexing (CPI) was previously presented. It simultaneously maximizes the correlation between documents in the local patches and minimizes the correlation between documents outside these patches. Consequently, a low-dimensional semantic subspace is derived in which documents with the same semantics are close to each other, using a learning-level parsing procedure based on the CPI method. The proposed CPI method with a learning-level parsing procedure finds the correlation between related documents so as to avoid the many unknown clusters that are not effective for finding the exact correlation between documents, which depends on the accuracy of sentences. The proposed CPI method with a learning-level parsing procedure in document clustering doubles the accuracy of the previous correlation coefficient. The behavior of the proposed hierarchical clustering algorithm differs from CPI in terms of NMI and accuracy.
Index Terms—Document clustering, correlation measure, correlation latent semantic indexing, dimensionality reduction.
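One common form of the correlation measure in the index terms is the Pearson correlation of term-frequency document vectors; documents about the same topic correlate highly. A self-contained toy illustration with made-up vectors:

```python
# Correlation similarity between term-frequency vectors over a shared vocabulary.
import math

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

doc_a = [3, 0, 1, 2]   # illustrative term frequencies
doc_b = [2, 0, 1, 3]
doc_c = [0, 4, 0, 0]
print(correlation(doc_a, doc_b))  # ~0.8: same semantics, high correlation
print(correlation(doc_a, doc_c))  # negative: different topics
```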
[1] Taiping Zhang, Yuan Yan Tang, Bin Fang, Yong Xiang, "Document Clustering in Correlation Similarity Measure Space," IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 6, June 2012.
[2] R.T. Ng and J. Han, "Efficient and Effective Clustering Methods for Spatial Data Mining," Proc. 20th Int'l Conf. Very Large Data Bases (VLDB), pp. 144-155, 1994.
[3] A.K. Jain, M.N. Murty, and P.J. Flynn, "Data Clustering: A Review," ACM Computing Surveys, vol. 31, no. 3, pp. 264-323, 1999.
[4] P. Pintelas and S. Kotsiantis, "Recent Advances in Clustering: A Brief Survey," WSEAS Trans. Information Science and Applications, vol. 1, no. 1, pp. 73-81, 2004.
[5] J.B. MacQueen, "Some Methods for Classification and Analysis of Multivariate Observations," Proc. Fifth Berkeley Symp. Math. Statistics and Probability, vol. 1, pp. 281-297, 1967.
[6] A.K. McCallum and L.D. Baker, "Distributional Clustering of Words for Text Classification," Proc. 21st Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval, pp. 96-103, 1998.
[7] X. Liu, Y. Gong, W. Xu, and S. Zhu, "Document Clustering with Cluster Refinement and Model Selection Capabilities," Proc. 25th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR '02), pp. 191-198, 2002.
[8] S.C. Deerwester, S.T. Dumais, T.K. Landauer, G.W. Furnas, and R.A. Harshman, "Indexing by Latent Semantic Analysis," J. Am. Soc. Information Science, vol. 41, no. 6, pp. 391-407, 1990.
[9] D. Cai, X. He, and J. Han, "Document Clustering Using Locality Preserving Indexing," IEEE Trans. Knowledge and Data Eng., vol. 17, no. 12, pp. 1624-1637, Dec. 2005.
[10] W. Xu, X. Liu, and Y. Gong, "Document Clustering Based on Non-Negative Matrix Factorization," Proc. 26th Ann. Int'l ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR '03), pp. 267-273, 2003.
Abstract: The present day demands more and more quality of service in broadband group communication to support huge access to Internet services and multimedia applications. The core-based solution is able to fulfill this demand to a large extent. In this paper, effort has been put into making it more flexible in comparison to SPAN/COST through a new approach, which can be an alternative to SPAN/ADJUST for solving the constraint of the non-singular core solution.
Keywords: Multicasting, QoS Routing, Core selection
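Core selection, as listed in the keywords, typically means picking the router that minimizes some distance objective to the multicast group. A minimal sketch that chooses the node with the smallest worst-case hop distance to the members, on an assumed toy topology:

```python
# Pick a multicast core: the node minimizing the maximum BFS distance to members.
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def select_core(adj, members):
    """Return (core, eccentricity): node with smallest worst-case member distance."""
    best, best_ecc = None, float("inf")
    for node in adj:
        d = bfs_dist(adj, node)
        ecc = max(d[m] for m in members)
        if ecc < best_ecc:
            best, best_ecc = node, ecc
    return best, best_ecc

adj = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"],
       "D": ["B"], "E": ["C"]}           # illustrative topology
print(select_core(adj, members=["A", "D", "E"]))   # ('B', 2)
```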
[1] A. Ballardie, Core Based Trees (CBT version 2) Multicast Routing - Protocol Specification, RFC 2189, September 1997.
[2] C. Shields, J.J. Garcia-Luna-Aceves, The ordered core based tree protocol, IEEE INFOCOM (1997).
[3] D. Zappala, A. Fabbri, V. Lo, An evaluation of multicast trees with multiple active cores, Journal of Telecommunication Systems, Kluwer, March 2002, pp. 461-479.
[4] A. Ballardie, Core Based Trees (CBT) Multicast Routing Architecture, RFC 2201, September 1997.
Abstract: Among all the obstacles that have existed between humans from the very early age of human civilization to the age of modern globalization, the language difference has always been a crucial problem. This paper emphasizes a language interpretation device and its application in the modern world to alleviate the differences between humans and the discrimination between tongues across continents. The paper deals with the concept of HCI (Human Computer Interaction) in developing a language interpretation device which can automatically interpret multiple languages into a target language in a dynamic environment within a specific time. The concept is developed with a sense of mobility, with an efficient method of consuming voice as input and producing a processed, meaningful voice as output. The paper also demonstrates the workflow of the systems implemented in this device, describes the core system architecture and the function of each component in the system, and shows the effectiveness of the presented approach along with some future work.
Keywords: Language interpretation, HCI (Human Computer Interaction), SRS (Speech Recognition System), mobility, language cartridge.
[1] J. M. Lande. Interpreting Device. United States Patent Office, Serial Number. 506,603, 3 Claims (Cl. 35-2), 1943.
[2] Stephen A. Rondel, Redmond and Joel R. Carter, Mukilteo. Voice Language Translator. United States Patent Office, Application Number 306,001, 1989.
[3] Omar Mubin, Christoph Bartneck and Loe Feijs. Towards the design and evaluation of ROILA: a speech recognition friendly artificial language. Advances in Neural Processing, pp. 250-256, 2010.
[4] Xi Shi and Yangsheng Xu. A Wearable Translation Robot. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005.
[5] Martin Kay. The Proper Place of Men and Machines in Language Translations. Machine Translation, Volume: 12, Issue: CSL-80-11, Publisher: Springer, Pages: 3-23, 1997.
[6] Hy Murveit, John Butzberger, and Mitch Weintraub. Speech Recognition in SRI's Resource Management and ATIS Systems. Speech and Natural Language, Proceedings of a Workshop, HLT, 1991.
Paper Type | : | Research Paper |
Title | : | Improving Web Image Search Re-ranking |
Country | : | India |
Authors | : | L. Ramadevi, Ch. Jayachandra, D. Srivalli |
DOI | : | 10.9790/0661-1314450 |
Abstract: Nowadays, web-scale image search engines (e.g. Google Image Search, Microsoft Live Image Search) rely almost purely on surrounding text features, which leads to ambiguous and noisy results. We propose an adaptive visual similarity to re-rank the text-based search results. A query image is first categorized into one of several predefined intention categories, and a category-specific similarity measure is used to combine the image features for re-ranking based on the query image. Extensive experiments demonstrate that using this algorithm to filter the output of Google Image Search and Microsoft Live Image Search is a practical and effective way to dramatically improve the user's experience.
Keywords: Content Based Image Retrieval, Image Ranking, Image Searching, Semantic Matching, Visual Re-ranking, Image Ranking and Retrieval Techniques.
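A sketch of the category-adaptive similarity idea: each intention category weights the image features differently, and candidates are re-ranked by their weighted similarity to the query. The categories, features, and weights below are illustrative assumptions, not the paper's learned values:

```python
# Category-specific weighted similarity for re-ranking image results.

CATEGORY_WEIGHTS = {            # hypothetical per-category feature weights
    "portrait": {"face": 0.7, "color": 0.2, "texture": 0.1},
    "scenery":  {"face": 0.0, "color": 0.5, "texture": 0.5},
}

def feat_sim(a, b):
    """Toy similarity between two scalar feature values."""
    return 1.0 / (1.0 + abs(a - b))

def similarity(query_feats, cand_feats, category):
    w = CATEGORY_WEIGHTS[category]
    return sum(w[f] * feat_sim(query_feats[f], cand_feats[f]) for f in w)

def rerank(query, candidates, category):
    return sorted(candidates,
                  key=lambda c: similarity(query, c, category), reverse=True)

q = {"face": 0.9, "color": 0.4, "texture": 0.3}
cands = [{"face": 0.85, "color": 0.1, "texture": 0.9},
         {"face": 0.2,  "color": 0.4, "texture": 0.3}]
print(rerank(q, cands, "portrait"))   # the face-similar image ranks first
```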
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.
[2] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of 7th International World-Wide Web Conference, 1998.
[3] J. P. Callan, W. B. Croft, and S. M. Harding. The INQUERY retrieval system. In Proceedings of 3rd International Conference on Database and Expert Systems Applications, 1992.
[4] D. Hull. Using statistical testing in the evaluation of retrieval experiments. In Proceedings of the 16th SIGIR Conference, (1993), 329–338.
[5] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of ACM, 46(5),(1999),604–632.
[6] V. Lavrenko and W. B. Croft. Relevance-based language models. In Proceedings of the International ACM SIGIR Conference, 2001.
[7] R. Lempel and A. Soffer. PicASHOW: Pictorial authority search by hyperlinks on the web. In Proceedings of the 10th International World-Wide Web Conference, 2001.
[8] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification. In Proceedings of the 15th International Confernece on Machine Learning (1998), 341–349.
[9] M. Porter. An algorithm for suffix stripping. Program, 14(3),(1980),130–137.
[10] C. J. van Rijsbergen. A theoretical basis for the use of cooccurrence data in information retrieval. Journal of Documentation33 ,( 1977),106–119
Paper Type | : | Research Paper |
Title | : | Achieving Privacy in Publishing Search logs |
Country | : | India |
Authors | : | D. Srivalli, P. Nikhila, Ch. Jayachandra |
DOI | : | 10.9790/0661-1315160 |
Abstract: The "database of intentions," collects by the search engine companies for the histories of their users search queries. These searchlogs are a gold mine for researchers. The Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper, we are analysing algorithms for publishing frequent queries, keywords and clicks of a search log. Our evaluation includes the applications that use search logs for improving both search experience and search performance, and our results show that the ZEALOUS' output is sufficient for these applications while achieving strong formal privacy guarantees. We are using two real applications from the information retrieval community: Index caching, as the representative application for search performance, and for the query substitution, as a representative application for search quality. For both applications, the sufficient statistics are histograms of keywords, queries, or query pairs.
Keywords: web search, information technology, database management, data storage and retrieval.
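The ZEALOUS algorithm mentioned in the abstract publishes a sanitized histogram by thresholding twice with noise in between. A minimal sketch; the thresholds and noise scale are illustrative assumptions, whereas the paper derives calibrated values:

```python
# ZEALOUS-style two-phase histogram publication with Laplace noise.
import random
from collections import Counter

def zealous(items, tau1=5, tau2=8, scale=1.0):
    """Publish a noisy histogram containing only sufficiently frequent items."""
    hist = Counter(items)
    out = {}
    for item, count in hist.items():
        if count < tau1:                  # phase 1: drop rare (risky) items outright
            continue
        # Laplace(scale) noise as the difference of two exponentials:
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        if count + noise >= tau2:         # phase 2: threshold the noisy count
            out[item] = round(count + noise)
    return out

random.seed(7)
queries = ["weather"] * 20 + ["rare disease name"] * 2 + ["news"] * 9
print(zealous(queries))   # the rare, identifying query is suppressed
```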
[1] N. R. Adam and J. C. Wortmann, "Security-control methods for statistical databases: a comparative study", ACM Computing Surveys, 25(4), 1989.
[2] N. Ailon, M. Charikar, and A. Newman, "Aggregating inconsistent information: ranking and clustering", In STOC 2005, 684–693.
[3] A. Blum, C. Dwork, F. McSherry, and K. Nissim, " Practical privacy: The SuLQ framework", In PODS, 2005.
[4] S. Chawla, C. Dwork, F. McSherry, A. Smith, and H. Wee, "Toward privacy in public databases. In Theory of Cryptography Conference (TCC)",2005, 363–385.
[5] S. Chawla, C. Dwork, F. McSherry, and K. Talwar. "On the utility of privacy-preserving histograms", In 21st Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
[6] C. Clifton, M. Kantarcioglu, J. Vaidya, X. Lin, and M. Y. Zhu. "Tools for privacy preserving data mining. SIGKDD Exploration",4(2),2002,28–34.
[7] I. Dinur and K. Nissim, "Revealing information while preserving privacy", In PODS, 2003,202–210.
[8] C. Dwork. "Differential privacy", In ICALP, 2006,1–12.
[9] B.-C. Chen, K. LeFevre, and R. Ramakrishnan,"Privacy skyline: privacy with multidimensional adversarial knowledge", In VLDB, 2007.
[10] V.Ciriani, S. De Capitani di Vimercati, S. Foresti, and P. Samarati. k-anonymity, "Secure Data Management in Decentralized Systems", 2007.
Paper Type | : | Research Paper |
Title | : | Malwise-Malware Classification and Variant Extraction |
Country | : | India |
Authors | : | P. Nikhila, D. Srivalli, L. Ramadevi |
DOI | : | 10.9790/0661-1316166 |
Abstract: Malware, short for malicious software, denotes a variety of forms of intrusive, hostile or annoying program code or software. Malware is a pervasive problem in distributed computer and network systems. Malware variants often have distinct byte-level representations while in principle belonging to the same family of malware. The byte-level content differs because small changes to the malware source code can result in significantly different compiled object code. In this project we describe malware variants with the umbrella term of polymorphism. We are the first to use the approach of structuring and decompilation to generate malware signatures. We employ both dynamic and static analysis to classify malware. Entropy analysis initially determines whether the binary has undergone a code packing transformation. If packed, dynamic analysis employing application-level emulation reveals the hidden code, using entropy analysis to detect when unpacking is complete. Static analysis then identifies characteristics, building signatures for the control flow graph of each procedure. The similarities between this set of control flow graphs and those in a malware database accumulate to establish a measure of similarity, and a similarity search is performed on the malware database to find objects similar to the query. Additionally, a more effective approximate flow graph matching algorithm is proposed that uses the decompilation technique of structuring to generate string-based signatures amenable to the string edit distance. We use real and synthetic malware to demonstrate the effectiveness and efficiency of Malwise.
Keywords: Bayes classifier, Computer Security, Random forest, Spyware.
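The packing check described in the abstract usually reduces to measuring byte-level Shannon entropy: packed or encrypted sections look nearly random. A short sketch; the 6.8-bit threshold is a common heuristic assumption, not Malwise's calibrated test:

```python
# Byte-entropy test for detecting packed/encrypted binaries.
import math
import os
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0..8)."""
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_packed(data, threshold=6.8):   # heuristic threshold (assumption)
    return byte_entropy(data) > threshold

plain = b"mov eax, ebx; push ebp; mov ebp, esp; " * 50   # repetitive, low entropy
packed = os.urandom(2000)                                 # stand-in for packed code
print(byte_entropy(plain), byte_entropy(packed))   # ~3-4 bits vs ~7.9 bits
print(looks_packed(plain), looks_packed(packed))   # False True
```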
[1] J. O. Kephart and W. C. Arnold, "Automatic extraction of computer virus signatures," in 4th Virus Bulletin International Conference, 1994, 178-184.
[2] J. Z. Kolter and M. A. Maloof, "Learning to detect malicious executables in the wild," in International Conference on Knowledge Discovery and Data Mining, 2004, 470-478.
[3] M. E. Karim, A. Walenstein, A. Lakhotia, and L. Parida, "Malware phylogeny generation using permutations of code," Journal in Computer Virology, vol. 1, 2005, 13-23.
[4] M. Gheorghescu, "An automated virus classification system," in Virus Bulletin Conference, 2005, 294-300.
[5] Y. Ye, D. Wang, T. Li, and D. Ye, "IMDS: intelligent malware detection system," in Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, 2007.
[6] E. Carrera and G. Erdélyi, "Digital genome mapping–advanced binary malware analysis," in Virus Bulletin Conference, 2004, 187-197.
[7] T. Dullien and R. Rolles, "Graph-based comparison of Executable Objects (English Version)," in SSTIC, 2005.
[8] I. Briones and A. Gomez, "Graphs, Entropy and Grid Computing: Automatic Comparison of Malware," in Virus Bulletin Conference, 2008, 1-12.
[9] S. Cesare and Y. Xiang, "Classification of Malware Using Structured Control Flow," in 8th Australasian Symposium on Parallel and Distributed Computing (AusPDC 2010), 2010.
[10] G. Bonfante, M. Kaczmarek, and J. Y. Marion, "Morphological Detection of Malware," in International Conference on Malicious and Unwanted Software, IEEE, Alexendria VA, USA, 2008, 1-8.
Paper Type | : | Research Paper |
Title | : | Captcha Recognition and Robustness Measurement using Image Processing Techniques |
Country | : | India |
Authors | : | Ramya T., Jayasree M. |
DOI | : | 10.9790/0661-1316772 |
Abstract: The advances in web-based technology have revolutionized the way people communicate and share information, necessitating firm security measures. Network security prevents and monitors unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. A CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a standard security mechanism for addressing undesirable or malicious Internet bot programs. CAPTCHA generates and grades tests that are human solvable but beyond the capabilities of current computer programs. Captchas prevent quality degradation of a system by automated software, protect systems that are vulnerable to e-mail spam, and minimize automated posting to blogs, forums and wikis. This paper carries out a systematic study of various text-based Captchas and proposes the application of forepart-based prediction and character-adaptive masking to break these captchas in order to evaluate their robustness. Captcha segmentation and recognition is based on forepart prediction, necessity-sufficiency matching and character-adaptive masking. Different classes of captchas, such as simple captchas, BotDetect captchas and Google captchas, are taken into consideration.
Keywords : Captcha, Robustness, Segmentation, Character-adaptive masking.
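Before forepart prediction can run, a text Captcha is usually cut into candidate glyphs; the simplest cut uses a vertical projection of ink per column. A toy sketch on a hand-made bitmap (an illustration only; the paper's segmentation also handles merged characters, which this does not):

```python
# Vertical-projection segmentation: split a binary image at empty columns.

def segment_columns(img):
    """img: list of rows of 0/1 pixels. Return (start, end) column spans of glyphs."""
    width = len(img[0])
    ink = [sum(row[x] for row in img) for x in range(width)]  # pixels per column
    spans, start = [], None
    for x, c in enumerate(ink):
        if c and start is None:
            start = x                      # glyph begins
        elif not c and start is not None:
            spans.append((start, x))       # glyph ends at an empty column
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

img = [[0, 1, 1, 0, 0, 1, 0, 0, 1, 1],
       [0, 1, 0, 0, 0, 1, 0, 0, 0, 1],
       [0, 1, 1, 0, 0, 1, 0, 0, 1, 1]]
print(segment_columns(img))   # [(1, 3), (5, 6), (8, 10)]: three glyphs
```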
[1]. Ahmad Salah El Ahmad, Jeff Yan, Mohamad Tayara, "The Robustness of Google CAPTCHAs," Technical report, Newcastle University, UK, 2011.
[2]. Jiqiang Song, Zuo Li, Michael R. Lyu, Shijie Cai, "Recognition of Merged Characters Based on Forepart Prediction, Necessity-Sufficiency Matching, and Character-Adaptive Masking," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 35, No. 1, February 2005.
[3]. J. Yan and A. S. El Ahmad, "Breaking Visual CAPTCHAs with Naïve Pattern Recognition Algorithms," in Proc. of the 23rd Annual Computer Security Applications Conference (ACSAC '07), FL, USA, Dec 2007, IEEE Computer Society, pp. 279-291.
[4]. G. Mori and J. Malik, "Recognising objects in adversarial clutter: breaking a visual CAPTCHA," IEEE Conference on Computer Vision & Pattern Recognition (CVPR), IEEE Computer Society, vol. 1, pp. I-134-I-141, June 18-20, 2003.
[5]. K. Chellapilla, K. Larson, P. Simard and M. Czerwinski, "Building Segmentation Based Human-friendly Human Interaction Proofs," 2nd Int'l Workshop on Human Interaction Proofs, Springer-Verlag, LNCS 3517, 2005.
Abstract: Mining frequent itemsets and association rules is a popular and well-researched approach for discovering interesting relationships between variables in large databases. Association rule mining is one of the most important techniques of data mining; it aims to induce associations among sets of items in transaction databases or other data repositories. Various algorithms have been developed and customized to derive effective rules to improve business. Among them, the Apriori and FP-Growth algorithms play a vital role in finding frequent itemsets and subsequently deriving rule sets based on business constraints. However, there are a few shortfalls in these conventional algorithms: i) candidate item generation consumes a lot of time in the case of large datasets; ii) they mainly support the conjunctive nature of association rules; iii) a single minimum support factor does not suffice to generate effective rules; iv) 'support/confidence' alone does not help to validate the generated rules; and v) negative rules are not addressed effectively. Points i) to iv) were addressed in earlier works [10][13]. However, identifying and deriving negative rules is still a challenge. The proposed work is considered an extended version of our earlier work [13]; it focuses on how negative rules can be derived effectively with the help of logical rule sets, which was not addressed in our earlier work. For this exercise the earlier work has been taken as the reference, and appropriate modifications and additions are applied wherever applicable. Hence, by using this approach, conjunctive and disjunctive as well as positive and negative rules can be generated effectively in an optimized manner.
Keywords - Logical rule set, FP Growth Algorithm, Genetic Algorithm, Lift ratio, Multiple Minimum Support, Disjunctive Rules
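The standard rule measures behind this discussion, including the lift ratio listed in the keywords, are easy to compute directly; a lift below 1 is exactly the signal that supports a negative rule. A toy illustration on made-up transactions:

```python
# Support, confidence and lift for association rules.

transactions = [{"bread", "milk"}, {"bread", "diapers"}, {"milk", "diapers"},
                {"bread", "milk", "diapers"}, {"milk"}]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):
    """lift > 1: positive association; lift < 1: evidence for a negative rule."""
    return confidence(lhs, rhs) / support(rhs)

print(support({"bread", "milk"}))        # 0.4
print(confidence({"bread"}, {"milk"}))   # ~0.67
print(lift({"bread"}, {"milk"}))         # ~0.83 < 1: supports bread -> NOT milk
```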
[1] Anandhavalli M, Suraj Kumar Sudhanshu, Ayush Kumar and Ghose M.K., "Optimized association rule mining using genetic algorithm", Advances in Information Mining, ISSN 0975-3265, 1(2), 2009.
[2] Marcus C. Sampaio, Fernando H. B. Cardoso, Gilson P. dos Santos Jr.,Lile Hattori "Mining Disjunctive Association Rules" 15 Aug. 2008
[3] Bing Liu, Wynne Hsu and Yiming Ma "Mining Association Rules with Multiple Minimum Supports"; ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD-99), August 15-18, 1999, San Diego, CA, USA.
[4] Yeong-Chyi Lee, Tzung-Pei Hong, Wen-Yang Lin, "Mining Association Rules with Multiple Minimum Supports Using Maximum Constraints", Elsevier Science, November 22, 2004.
[5] Michelle Lyman, "Mining Disjunctive Association Rules Using Genetic Programming" The National Conference On Undergraduate Research (NCUR); April 21-23, 2005
[6] Farah Hanna AL-Zawaidah, Yosef Hasan Jbara, Marwan AL-Abed Abu-Zanona, "An Improved Algorithm for Mining Association Rules in Large Databases" ; Vol. 1, No. 7, 311-316, 2011
[7] Rupesh Dewang, Jitendra Agarwal, "A New Method for Generating All Positive and Negative Association Rules"; International Journal on Computer Science and Engineering, vol.3,pp. 1649-1657,2011
[8] Olafsson Sigurdur, Li Xiaonan, and Wu Shuning, "Operations research and data mining," in: European Journal of Operational Research 187 (2008), pp. 1429-1448.
[9] Agrawal R., Imielinski T. and Swami A., "Database mining: a performance perspective", IEEE Transactions on Knowledge and Data Engineering 5 (6), (1993), pp. 914-925.
[10] Kannika Nirai Vaani.M, Ramaraj E "An integrated approach to derive effective rules from association rule mining using genetic algorithm", Pattern Recognition, Informatics and Medical Engineering (PRIME), 2013 International Conference, (2013), pp: 90–95.
Paper Type | : | Research Paper |
Title | : | Yours Advance Security Hood (Yash) |
Country | : | India |
Authors | : | Yashasvini Sharma |
DOI | : | 10.9790/0661-1318389 |
Abstract: Hacking of an online account is the practice by which an unauthorized person gains access to the account of another person. People usually get hacked through their own mistake of exposing the password to hackers; common examples are social engineering, shoulder surfing, guessing, hoaxes, and accessing an email account from a cyber cafe (key loggers). All of the above occur because of either a lack of awareness or some type of fraud, so people should be very careful about them. But there are also techniques, called dictionary attacks and brute-force attacks, that are carried out by hackers directly and are difficult to escape. As is often stated, a perfect password does not exist; a hacker can crack any password if he has enough time and the right "dictionary" or "brute force" tools.
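The claim that any password eventually falls to brute force can be backed with a quick keyspace calculation; the guess rate below is an illustrative assumption:

```python
# Worked brute-force estimate: keyspace / guess rate = worst-case crack time.

GUESSES_PER_SEC = 1e9   # assumed offline cracking rate

def crack_time_years(alphabet_size, length):
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SEC / (3600 * 24 * 365)

print(f"{crack_time_years(26, 8):.8f} years")   # lowercase-only, 8 chars: minutes
print(f"{crack_time_years(94, 12):.0f} years")  # full printable ASCII, 12 chars: ages
```

The same arithmetic explains why length and alphabet size, not cleverness, dominate password strength.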
[1] Y. He and Z. Han, "User Authentication with Provable Security against Online Dictionary Attacks," J. Networks, vol. 4, no. 3, May 2009.
[2] N. Bohm, I. Brown, B. Gladman, Electronic Commerce: Who Carries the Risk of Fraud? The Journal of Information, Law and Technology, 2000 (3).
[3] Authentication Against Guessing Attacks in Ad Hoc Networks, International Journal of Network Security, Vol. 8.
Books:
[1] Hacking Exposed: Network Security Secrets & Solutions, 5th Edition, by Stuart McClure, Joel Scambray and George Kurtz.
[2] Improving Web Application Security: Threats and Countermeasures, Mark Curphey.
Conference Proceedings:
[1] A. Narayanan and V. Shmatikov, "Fast Dictionary Attacks on Human-Memorable Passwords Using Time-Space Tradeoff," Proc. ACM Computer and Comm. Security (CCS '05).
[2] B. Pinkas and T. Sander, "Securing Passwords against Dictionary Attacks," Proc. ACM Conf. Computer and Comm. Security (CCS '02).
Generic Websites:
[1] Password cracking - Wikipedia, the free encyclopedia.
[2] Brute force attack, http://www.mandylionlabs.com/PRCCalc/BruteForceCalc.htm (Accessed: 28-Aug-2012).
[3] Dictionary attacks, http://www.cryptosmith.com/node/231 (Accessed: 02-Sep-2012).
[4] http://www.dummies.com/how-to/content/a-case-study-in-how-hackers-use-windows-password-v.html
Abstract: Accurate estimation of ridge orientation is a crucial step in fingerprint image enhancement, because the performance of a minutiae extraction algorithm and of matching relies heavily on the quality of the input fingerprint images, which in turn relies on proper estimation of ridge orientation. When the ridge structure is not clear in a noisy fingerprint image, ridge orientation estimation becomes tough, and it is considered one of the most challenging tasks in the fingerprint image enhancement process. A new methodology for ridge orientation estimation, based on a neural network approach and Ternarization, is proposed in this paper. In the present work a trained Back Propagation Neural Network (BPNN) is used for accurate ridge orientation. The advantage of the Ternarization process is that it eradicates the false ridge orientations for which the neural network wrongly responds with a larger value and, at the same time, keeps the correct ridge orientation blocks intact without blurring them. This helps in qualitative extraction of minutiae points from the fingerprint image. The experimental results have shown that the proposed method for estimating ridge orientation works far better than the traditional gradient-based approach.
Keywords: Feature Vector, Minutiae Extraction, Neural Network, Ridge Orientation, Ternarization.
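For reference, the traditional gradient-based estimator that the paper benchmarks against can be written in a few lines: average the gradient products over a block and take half the angle of the doubled-orientation vector. A minimal NumPy sketch on a synthetic block:

```python
# Classic gradient-based block orientation (the baseline, not the paper's BPNN).
import numpy as np

def block_orientation(block):
    """Dominant ridge angle (radians) of a grayscale block via averaged gradients."""
    gy, gx = np.gradient(block.astype(float))   # gradients along y and x
    gxx = (gx * gx).sum()
    gyy = (gy * gy).sum()
    gxy = (gx * gy).sum()
    # Half-angle of the doubled-gradient direction; ridges run perpendicular
    # to the mean gradient, hence the +pi/2.
    return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2

# Synthetic block with horizontal ridges (intensity varies only along y):
y = np.arange(16)
block = np.tile(np.sin(y * 0.8)[:, None], (1, 16))
print(np.degrees(block_orientation(block)) % 180)   # ~0 degrees: horizontal ridges
```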
[1] Zhu E., Yin J.P., Zhang G.M., Fingerprint Matching Based on Global Alignment of Multiple Reference Minutiae, Pattern Recognition, 2005, 38(10): 1685-1694.
[2] Zhu E., Yin J.P., Zhang G.M., Hu C.F., Fingerprint Minutia Relationship Representation and Matching Based on Curve Coordinate System, International Journal of Image and Graphics, 2005, 5(4): 729-744.
[3] Kawagoe M., Tojo A., Fingerprint Pattern Classification, Pattern Recognition, 1984, 17(3): 295-303.
[4] Mehtre B.M., Murthy N.N., Kapoor S., Segmentation of Fingerprint Images Using the Directional Image, Pattern Recognition, 1987, 20(4): 429-435.
[5] Hung D.C.D., Enhancement and Feature Purification of Fingerprint Images, Pattern Recognition, 1993, 26(11): 1661-1672.
[6] En Zhu, Jian-Ping Yin, Guo-Min Zhang, Chun-Feng Hu, "Fingerprint Ridge Orientation Estimation Based on Neural Network," in ISPRA'06: Proc. 5th WSEAS Int. Conference on Signal Processing, Robotics and Automation, Madrid, Spain, February 15-17, 2006, pp. 158-164.
[7] Sherlock B., Monro D., "A Model for Interpreting Fingerprint Topology," Pattern Recognition, vol. 26, no. 7, pp. 1047-1055, 1993.
[8] Vizcaya P., Gerhardt L., "A Nonlinear Orientation Model for Global Description of Fingerprints," Pattern Recognition, 1996, vol. 29, no. 7, pp. 1221-1231.
[9] Araque J., Baena M., Chalela B., Navarro D., Vizcaya P., "Synthesis of Fingerprint Images," in Proc. Int. Conf. on Pattern Recognition, 2002, vol. 2, pp. 422-425.
[10] Zhu E., Yin J. P., Hu C. F., Zhang G. M., "Quality estimation of fingerprint image based on neural network," in Proc. of International Conference on Natural Computing, LNCS 3611, (2005) pp. 65-70.
Paper Type | : | Research Paper |
Title | : | Topic-specific Web Crawler using Probability Method |
Country | : | India |
Authors | : | S. Subatra Devi, Dr. P. Sheik Abdul Khader |
DOI | : | 10.9790/0661-131102106 |
Abstract: The Web has become an integral part of our lives, and search engines play an important role in letting users search online content on a specific topic. The web is a huge and highly dynamic environment that is growing exponentially in content and developing fast in structure. No search engine can cover the whole web, so it has to focus on the most valuable pages for crawling. Many methods based on link and text content analysis have been developed for retrieving pages. A topic-specific web crawler collects from the web those pages relevant to the topics the user is interested in. In this paper, we present an algorithm that combines link analysis, text content analysis using the Levenshtein distance, and a probability method to fetch a larger number of relevant pages for the topic specified by the user. Evaluation illustrates that the proposed web crawler collects the best web pages for the user's interests during the early period of crawling.
Keywords - Levenshtein Distance, Hyperlink, Probability Method, Search engine, Web Crawler.
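The Levenshtein distance named in the keywords is the minimum number of single-character edits between two strings; a standard dynamic-programming implementation is shown below (the crawler's own similarity thresholds are not given in the abstract, so none are assumed):

```python
# Levenshtein (edit) distance with a two-row dynamic program.

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("crawler", "crawling"))   # 3
```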
[1] T.H. Haveliwala, Topic-Sensitive PageRank, Proceedings of the 11th World Wide Web Conference, pp. 517-526.
[2] Blaž Novak, Survey of focused web crawling algorithms, in Proceedings of SIKDD, pp. 55-58, 2004.
[3] Shalin Shah, Implementing an Effective Web Crawler, Sep. 2006; Pant, G., Srinivasan, P., Menczer, F., Crawling the Web, in Web Dynamics: Adapting to Change in Content, Size, Topology and Use, edited by M. Levene and A. Poulovassilis, Springer-Verlag, pp. 153-178, November 2004.
[4] Debashis Hati and Amritesh Kumar, An Approach for Identifying URLs Based on Division Score and Link Score in Focused Crawler, International Journal of Computer Applications, Vol. 2, no. 3, May 2010.
[5] A. Rungsawang, N. Angkawattanawit, Learnable topic-specific web crawler, Journal of Network and Computer Applications, no. 28, pp. 97-114, 2005.
[6] Michael Hersovici, Michal Jacovi, Yoelle S. Maarek, Dan Pelleg, Menachem Shtalhaim and Sigalit Ur, The shark-search algorithm. An application: tailored Web site mapping, in Proceedings of the Seventh International World Wide Web Conference on Computer Networks and ISDN Systems, Vol. 30, no. 1-7, pp. 317-326, April 1998.
[7] P. De Bra, G-J Houben, Y. Kornatzky, and R. Post, Information Retrieval in Distributed Hypertexts, in the Proceedings of RIAO'94, Intelligent Multimedia, Information Retrieval Systems and Management, New York, NY, 1994.
[8] S. Chakrabarti, M. van den Berg, and B. Dom, Focused Crawling: A New Approach for Topic-Specific Resource Discovery, In Proc. 8th WWW, 1999.
[9] A. Rungsawang, N. Angkawattanawit, Learnable Crawling: An Efficient Approach to Topic-specific Web Resource Discovery, 2005.
[10] K.Bharat and M.Henzinger, Improved Algorithms for Topic Distillation in a Hyperlinked Environment, In proc. Of the ACM SIGIR '98 conference on Research and Development in Information Retrieval.
Paper Type | : | Research Paper |
Title | : | Assessing Buffering with Scheduling Schemes in a QoS Internet Router |
Country | : | Nigeria |
Authors | : | Onadokun I. O., Oladeji F. A. |
DOI | : | 10.9790/0661-131107113 |
Abstract: A key requirement for service differentiation, as required in the Internet of the future, and for QoS to work effectively is the extension of the traffic management routines of the current TCP/IP protocol. Two such traffic management functions are the introduction of differential packet buffering and multi-queue scheduling algorithms at the routers. Different propositions have been made to extend the Internet best-effort service model, but they are yet to be incorporated into the protocol as they are still subject to experimentation. This paper examines priority, round robin and weighted round robin scheduling algorithms that could be used in a multi-queue platform and simulates them with the RIO-C penalty enforcement buffering scheme to determine which one could improve network performance in terms of loss rate. From the simulation analyses, the priority discipline ranked first with a scheduler drop rate of 29.46% and a RED loss rate of 14.95%. Round robin ranked second with 29.53% and 10.50% scheduler and RED loss rates respectively. Weighted round robin ranked third with 30.28% and 3.04% scheduler and RED loss rates respectively. From these results, it was observed that a network desiring a feasible quality of service implementation could adopt admission control based on RIO-C with the priority scheduling algorithm.
Keywords: Loss rate, Priority scheduling, QoS, RIO-C, TCP/IP
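Of the three disciplines compared, weighted round robin is the least obvious to state precisely: each queue is visited in a fixed cycle and may send up to its weight in packets per round. A minimal sketch with assumed weights and queue contents:

```python
# Weighted round robin over per-class packet queues.
from collections import deque

def weighted_round_robin(queues, weights, rounds=2):
    """Serve each queue up to its weight per round; yield (queue_name, packet)."""
    for _ in range(rounds):
        for name, w in weights.items():
            q = queues[name]
            for _ in range(min(w, len(q))):
                yield name, q.popleft()

queues = {"gold": deque(range(5)), "silver": deque(range(5)), "bronze": deque(range(5))}
weights = {"gold": 3, "silver": 2, "bronze": 1}   # illustrative assumptions
print(list(weighted_round_robin(queues, weights)))
# gold gets 3 packets per round, silver 2, bronze 1
```

Setting all weights to 1 recovers plain round robin, and serving a queue to exhaustion before the next recovers strict priority, so one loop structure covers all three schemes in the comparison.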
[1] R. Braden, D. Clark and S. Shenker, Integrated Services in the Internet Architecture: an Overview, IETF RFC 1633, 1994. Available: http://en.scientificcommons.org/42772402.
[2] S. Blake, D. Black, M. Carlson, Z. Wang and W. Weiss, An Architecture for Differentiated Services, IETF RFC 2474, 1998.
[3] D. Stiliadis, A. Varma, "Efficient Fair Queuing Algorithms for Packet Switched Networks," IEEE/ACM Transactions on Networking, Vol. 6, 1998.
[4] R. Jain, "Congestion Control in Computer Networks: Issues and Trends," IEEE Network Magazine, pp. 24-30, 1990.
Abstract: Internet banking has gained wide acceptance internationally and seems to be fast catching up in India, with more and more banks entering the fray. Online banking is defined as the use of the Internet as a remote delivery channel for banking services via the World Wide Web. Such systems enable customers to access their accounts and general information on bank products and services anywhere, anytime; the model of banking has transformed from brick and mortar to all-pervading 'Anywhere and Anytime Banking' through a PC or other intelligent device using web browser software such as Netscape Navigator, Microsoft Internet Explorer or Firefox. But online banking continues to present challenges to financial security and personal privacy. Billions of financial data transactions occur online every day, and bank cyber crimes take place every day when bank information is compromised by skilled criminal hackers manipulating a financial institution's online information system. This causes huge financial losses to banks and customers. So one of the major concerns of people with respect to internet banking is the safety of bank account data, transactional information, and the access paths to their accounts. The paper starts from the security problems Internet banking is facing, explains a suitable set of controls, consisting of policies, procedures, organisational structures, and hardware and software functions, that an organisation has to establish, explores the various technology and security standards the RBI recommends to banks for safe internet banking, and analyses current representative online banking security controls and measures with the case of ICICI Bank of India.
Keywords: Online Banking, Security threats, Security measures, RBI
[1]. Paul Jeffery Marshall, Online Banking: Information Security vs. Hackers, Research Paper, International Journal of Scientific & Engineering Research, Volume 1, Issue 1, Oct 2010.
[2]. Zakaria Karim, Karim Mohammed Rezaul, Aliar Hossain, Towards Secure Information Systems in Online Banking.
[3]. Internet banking in India, http://tips.thinkrupee.com/articles/internet-banking-in-india.php
[4]. S. Laforet and X. Li, Consumers' attitudes towards online and mobile banking in China, International Journal of Bank Marketing, vol. 23, no. 5, 2005, pp. 362-380.
[5]. Y. Zhu . How to strengthen Internet banking security management.Modern Finance, no. 10, 2006, pp. 32.
[6]. Hossein Jadidoleslamy. Designing a New Security Architecture for Online-Banking: A Hierarchical Intrusion Detection Architecture and Intrusion Detection System.The Computing Science and Technology International Journal , vol. 2, no. 2, June, 2012.
[7]. Damien Hutchinson, Matthew Warren, Security for Internet banking: A Framework, Logistics Information Management, Vol. 16, Number 1, 2003, pp. 64-73.
[8]. M. Mannan and P.C. van Oorschot. Security and Usability: The Gap in Real-World Online Banking. New Security Paradigms Workshop,2007.
[9]. Y. Nie and R. Huang. The risks and control of the Internet banking. Market Modernization, no 8, 2004, pp. 34-35.
[10]. Y. Huang. The research of Internet banking risk prevention strategy. Contemporary Finance, no. 4, 2008, pp. 44-45.
Paper Type | : | Research Paper |
Title | : | Lossless LZW Data Compression Algorithm on CUDA |
Country | : | India |
Authors | : | Shyni K., Manoj Kumar K. V. |
DOI | : | 10.9790/0661-131122127 |
Abstract: Data compression is an important area of information and communication technologies; it seeks to reduce the number of bits used to store or transmit information. It efficiently utilizes memory space and allows data to be transmitted within a limited bandwidth. Most compression is achieved by removing data redundancy while preserving information content. Data compression algorithms exploit certain characteristics to make the compressed data smaller than the original data, and every compression process works with a well-defined algorithm. Data compression on graphics processors (GPUs) has become an effective approach to improve the performance of main memory. CUDA is a parallel computing platform and programming model invented by NVIDIA; it enables dramatic increases in computing performance by using the graphics processing unit (GPU). Implementing data compression algorithms on CUDA yields a better compression process. In this paper, we implement the powerful LZW algorithm on the CUDA architecture. Due to the parallel characteristics of the GPU, compression time is much lower than in the CPU environment.
Keywords – CUDA, GPU, LZSS, LZW, LZO.
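For reference, the sequential form of the LZW algorithm named in the title fits in a dozen lines; the paper's contribution is parallelizing this dictionary-based loop on the GPU, which the sketch below does not attempt:

```python
# Sequential LZW encoder: grow a phrase dictionary, emit codes for longest matches.

def lzw_encode(data: bytes):
    table = {bytes([i]): i for i in range(256)}   # dictionary seeded with all bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                       # extend the current match
        else:
            out.append(table[w])         # emit code for the longest known phrase
            table[wc] = len(table)       # add the new phrase to the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(data)
print(len(codes), "codes for", len(data), "bytes")   # 16 codes for 24 bytes
```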
[1] David Salomon, "Data Compression: The Complete Reference", Third Edition, Department of Computer Science, California State University, USA.
[2] Mark Nelson and Jean-loup Gailly, "Data Compression Techniques", College of Applied Studies, University of Bahrain.
[3] Wenbin Fang, Bingsheng He, Qiong Luo, "Database Compression on Graphics Processors", Hong Kong University of Science and Technology.
Websites:
[4] "NVIDIA CUDA Compute Unified Device ArchitectureProgramming Guide" www.nvidia.com, 2012.
[5] M. F. X. J. Oberhumer, "LZO source code", www.oberhumer.com/opensource/lzo
Proceedings Papers:
[6] Adnan Ozsoy, Martin Swany, "CULZSS: LZSS Lossless Data Compression on CUDA," Department of Computer & Information Sciences University of Delaware Newark, DE 19716, 2011 IEEE International Conference on Cluster Computing.
[7] L. Erdődi, "File compression with LZO algorithm using NVIDIA CUDA architecture"LINDI 2012 • 4th IEEE International Symposium on Logistics and Industrial Informatics • September 5-7, 2012; Smolenice, Slovakia.
Paper Type | : | Research Paper |
Title | : | An Enhanced ILD Diagnosis Method using DWT |
Country | : | India |
Authors | : | Viji P. S., Jayasree M. |
DOI | : | 10.9790/0661-131128133 |
Abstract: Interstitial Lung Disease (ILD) is a group of lung diseases affecting the lung parenchyma. Since the lesions can be clearly identified in a CT scan of the lung, CT analysis is among the best ways for pathologists to identify ILD. Automated detection of any disease from images exploits the same appearance features of the organ under consideration that medical professionals use for diagnosis; here we apply this to detect lung diseases from CT images. Segmenting the arteries and veins from the image is one of the major steps in analyzing the medical image. An edge-enhanced lung image is needed for this, so that the vascular tree can be clearly identified. In this implementation, the discrete wavelet transform is applied for edge enhancement, followed by dynamic range compression. Wavelet edge enhancement and vessel enhancement filtering comprise the first stage of the algorithm; vessel enhancement filtering uses the eigenvalues of the Hessian of the image. The second stage of the algorithm corresponds to feature extraction and classification. Features are extracted from the co-occurrence matrix of the resulting vessel-segmented image, and the co-occurrence features form the input feature vector for a fuzzy SVM classifier. The performance of the proposed scheme is evaluated for accuracy.
Keywords – CAD, Wavelet edge enhancement, DWT, Fuzzy SVM, ILD diagnosis, PSO thresholding, Image classification, Dynamic range compression.
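The co-occurrence features feeding the classifier come from a gray-level co-occurrence matrix (GLCM); two of the classic measures, contrast and energy, are computed below on a tiny assumed image (the paper's exact feature set is not specified in the abstract):

```python
# Gray-level co-occurrence matrix and two texture features derived from it.
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized co-occurrence matrix for the (dx, dy) neighbour offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])           # illustrative 4-level image
p = glcm(img)
i, j = np.indices(p.shape)
contrast = ((i - j) ** 2 * p).sum()      # local intensity variation
energy = (p ** 2).sum()                  # texture uniformity
print(contrast, energy)
```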
[1] Z. A. Aziz, A. U. Wells, D. M. Hansell, S. J. Copley, S. R. Desai, S. M.Ellis, , F. V. Gleeson, S. Grubnic, A. G. Nicholson, S. P. G. Padley, K. S.Pointon, J. H. Reynolds, R. J. H. Robertson, and M. B. Rubens, "HRCTdiagnosis of diffuse parenchymal lung disease: Inter-observer variation,"Thorax, vol. 59, no. 6, pp. 506–511, 2004.
[2] I. C. Sluimer, P. F. van Waes, M. A. Viergever, and B. van Ginneken,"Computer-aided diagnosis in high resolution CT of the lungs," Med.Phys., vol. 30, no. 12, pp. 3081–3090, Jun. 2003.
[3] V. A. Zavaletta, B. J. Bartholmai, and R. A. Robb, "High resolution multidetector CT-aided tissue analysis and quantification of lung fibrosis,"Acad. Radiol., vol. 14, no. 7, pp. 772–787, Jul. 2007.
[4] K. Marten, V. Dicken, C. Kneitz, M. Hoehmann, W. Kenn, D. Hahn, and C. Engelke, "Computer-assisted quantification of interstitial lung disease associated with rheumatoid arthritis: Preliminary technical validation,"Eur. J. Radiol., vol. 72, no. 2, pp. 278–83, Aug. 2009.
[5] Korfiatis, C. Kalogeropoulou, A. Karahaliou, A. Kazantzi, S. Skiadopoulos,and L. Costaridou, "Texture classification-based segmentation of lung affected by interstitial pneumonia in high-resolution CT," Med.Phys., vol. 35, no. 12, pp. 5290–5302, Dec. 2008.
[6] H. Shikata, G. McLennan, E. A. Hoffman, M. Sonka, "Segmentation of pulmonary vascular trees from thoracic 3-d ct images," J. Biomed.Imag., p. 636240, Dec. 2009
[7] K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y.Trousset,"Model-based detection of tubular structures in 3-d images," Comput.Vis. Image Und., vol. 80, no. 2, pp. 130–171, 2000.
[8] C. Zhou, H. P. Chan, B. Shahiner, L. M. Hadjiiski, A. Chughtai, S. Patel,J. Wei, J. Ge, P. N. Cascade, and E. A. Kazerooni, "Automatic multiscale enhancement and segmentation of pulmonary vessels in CT pulmonary angiography images for CAD applications," Med. Phys., vol. 34, no. 12,pp. 4567–4577, Nov. 2007.
Proceedings Papers:
[9] P. Korfiatis, A. Karahaliou, A. Kazantzi, C. Kalogeropoulou, and L. Costaridou, "Towards quantification of interstitial pneumonia patterns in lung multidetector CT," IEEE Trans. Inf. Technol. Biomed., vol. 14,no. 7, pp. 675–680, May 2010.
[10] P. Lo, B. van Ginneken, and M. de Bruijne, "Vessel tree extraction using locally optimal paths," in Proc. IEEE Int. Symp. Biomed. Imag.: NanoMacro, 2010, pp. 680–683
Paper Type | : | Research Paper |
Title | : | Procuring the Anomaly Packets and Accountability Detection in the Network |
Country | : | India |
Authors | : | V. Laxman, Ms. P. Subhadra |
DOI | : | 10.9790/0661-131134137 |
Abstract: This is software used to find anomalous packets in Voice over Internet Protocol (VoIP) devices, such as soft phones and VoIP gateways implementing the Session Initiation Protocol specifications, and to test the compliance and interoperability of VoIP equipment produced by different manufacturers. Malicious network traffic is often "different" from benign traffic in ways that can be distinguished without knowing the nature of the attack. We describe a two-stage anomaly detection system for identifying suspicious traffic. First, we filter traffic to pass only the packets of most interest, e.g. the first few packets of incoming server requests. Second, we model the most common protocols (IP, TCP, telnet, FTP, SMTP, HTTP) at the packet byte level to flag events (byte values) that have not been observed for a long time. Different software packages are available on the market for conducting a compliance and interoperability validation phase; however, they often have features limited to packet capturing and decoding, or they are simulation tools that require a complex development phase to define the behavior of each test. The proposed tool, instead, can be inserted into a Session Initiation Protocol (SIP) network and is capable of observing and finding problems in an automatic way. It executes in three phases: 1. SIP messages flowing in the network are captured. 2. SIP messages are grouped into transactions and dialogs. 3. The message flow is compared with a set of predefined rules. These rules are classified into two groups: static rules, obtained by direct analysis of the SIP specifications, and dynamic rules, obtained from experience with SIP compliance and interoperability testing. If some rule fails during verification, an output is reported indicating the rule that failed and a list of possible fault causes.
Index Terms - Protocol, IDS, Anomaly.
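The second stage's "not observed for a long time" idea can be sketched as a novelty score: a never-seen value scores higher the longer the attribute has gone without producing anything new (a PHAD-like heuristic; the traffic below is an illustrative assumption, not the tool's actual model):

```python
# Novelty-based anomaly scoring for one packet attribute (e.g. destination port).

class NoveltyModel:
    def __init__(self):
        self.seen = set()
        self.last_novel = 0    # index of the most recent never-seen value

    def score(self, t, value):
        """0 for familiar values; otherwise grows with the quiet period length."""
        if value in self.seen:
            return 0.0
        s = float(t - self.last_novel)   # long stability makes novelty suspicious
        self.seen.add(value)
        self.last_novel = t
        return s

m = NoveltyModel()
for t, port in enumerate([80, 80, 443, 80, 443, 80, 80, 80, 6667]):
    print(t, port, m.score(t, port))   # the late, never-seen port 6667 scores highest
```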
[1] Bell, Timothy, Ian H. Witten, John G. Cleary, "Modeling for Text Compression", ACM Computing Surveys (21)4, Dec.1989 ,pp. 557-591.
[2] Barbará, D., N. Wu, S. Jajodia, "Detecting Novel Network Intrusions using Bayes Estimators", First SIAM International Conference on Data Mining, 2001,
[3] Floyd, S. and V. Paxson, "Difficulties in Simulating the Internet." To appear in IEEE/ACM Transactions on Networking, 2001. http://www.aciri.org/vern/papers.html
[4] Forrest, S., S. A. Hofmeyr, A. Somayaji, and T. A. Longstaff, "A Sense of Self for Unix Processes", Proceedings of 1996 IEEE Symposium on Computer Security and Privacy. ftp://ftp.cs.unm.edu/pub/forrest/ieee-sp-96-unix.pdf
[5] Lippmann, R., et al., "The 1999 DARPA Off-Line Intrusion Detection Evaluation", Computer Networks 34(4) , 2000,579-595.
[6] Mahoney, M., P. K. Chan, "PHAD: Packet Header Anomaly Detection for Identifying Hostile Network Traffic", Florida Tech. technical report 2001-04, http://cs.fit.edu/~tr/
[7] Mahoney, M., P. K. Chan, "Learning Models of Network Traffic for Detecting Novel Attacks", Florida Tech. technical report 2002-08, http://cs.fit.edu/~tr/
[8] Mahoney, M., P. K. Chan, "Learning Non stationary Models of Normal Network Traffic for Detecting Novel Attacks ", Edmonton, Alberta: Proc. SIGKDD, 2002, 376-385.
[9] Paxson, Vern, "Bro: A System for Detecting Network Intruders in Real-Time", Lawrence Berkeley National Laboratory Proceedings, 7'th USENIX Security Symposium, Jan. 26-29, 1998, San Antonio TX,
[10] F. Wang, M. Hamdi, and J. Muppala, "Using Parallel DRAM to Scale Router Buffers," IEEE Trans. Parall and Distributed Systems, vol. 20, May 2009, pp. 710-724.
Abstract: In this paper, a new cryptosystem based on a block cipher has been proposed, where the encryption is done through the Modified Forward Backward Overlapped Modulo Arithmetic Technique (MFBOMAT). The original message is considered as a stream of bits, which is then divided into a number of blocks, each containing n bits, where n is any one of 2, 4, 8, 16, 32, 64, 128, 256. The first and last blocks are then added, where the modulus of addition is 2^n. The result replaces the last block (say the Nth block), the first block remaining unchanged (forward mode). In the next step the second and the Nth (changed) block are added and the result replaces the second block (backward mode). Again the second (changed) block and the (N-1)th block are added and the result replaces the (N-1)th block (forward mode). The modulo addition has been implemented in a very simple manner, where the carry out of the MSB is discarded to get the result. The technique is applied in a cascaded manner by varying the block size from 2 to 256. Decryption has been implemented using a corresponding modulo subtraction technique.
Keywords: FBOMAT, Symmetric block cipher, Cryptosystem
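The abstract's modulo addition (add two n-bit blocks and discard the carry out of the MSB) is exactly addition mod 2^n, with modulo subtraction as its inverse for decryption. A short worked example:

```python
# Modulo addition/subtraction on n-bit blocks, as described in the abstract.

def modulo_add(a: int, b: int, n: int) -> int:
    return (a + b) & ((1 << n) - 1)   # masking the low n bits discards the MSB carry

def modulo_sub(c: int, b: int, n: int) -> int:
    return (c - b) & ((1 << n) - 1)   # inverse operation, used for decryption

n = 8
a, b = 0b11010110, 0b10101100
c = modulo_add(a, b, n)
print(f"{c:08b}")                     # 10000010: the carry out of bit 7 is discarded
assert modulo_sub(c, b, n) == a       # decryption recovers the original block
```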
[1] Rajdeep Chakraborty, Debajyoti Guha and J. K. Mandal, "A Block Cipher Based Cryptosystem Through Forward Backward Overlapped Modulo Arithmetic Technique (FBOMAT)", International Journal of Engineering & Science Research (IJESR), ISSN 2277-2685, Volume 2, Issue 5 (May 2012), Article 7, pp. 349-360.
[2] W. Stallings, Cryptography and Network Security: Principles and Practices, Prentice Hall, Upper Saddle River, New Jersey, USA, Third Edition, 2003.
[3] Atul Kahate, Cryptography and Network Security, TMH, India, 2nd Ed., 2009.
[4] Behrouz Forouzan, Cryptography and Network Security, TMH, India, 4th Ed., 2010.