Debugging Microservices with Pandas and PySpark Using Actuators and Logs at Runtime
DOI: https://doi.org/10.26438/ijcse/v10i7.2730

Keywords: Microservice, Pandas, Spark, Actuator, Spring Boot, PyActuator, DataFrames

Abstract
Microservices architecture is distributed in nature, and the services within it are expected to be highly available and responsive. The number of services can scale from one to hundreds; the resulting distributed architecture is complex, and the chances of failure rise as services communicate with one another. A key advantage of microservices is the freedom to mix technologies to suit each service: if a service is CPU- or I/O-bound, it can be developed in whichever language or framework best fits that workload. Likewise, when an architecture contains hundreds of services, a dedicated debugging system can be built on any suitable platform or framework; two such libraries are Pandas and PySpark. This paper focuses on building such a debugging tool for a microservices architecture using the Python-based libraries PySpark and Pandas together with the concept of Actuators.
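As a minimal sketch of the idea described above, the JSON exposed by an actuator-style health endpoint (such as Spring Boot's `/actuator/health`) can be flattened into a pandas DataFrame and filtered to surface failing services. The service names and payloads below are hypothetical stand-ins for responses that would normally be fetched over HTTP from each service:

```python
import pandas as pd

# Hypothetical /actuator/health payloads for two services; in practice
# these would be fetched over HTTP from each running service instance.
payloads = {
    "orders":   {"status": "UP",   "components": {"db": {"status": "UP"},
                                                  "diskSpace": {"status": "UP"}}},
    "payments": {"status": "DOWN", "components": {"db": {"status": "DOWN"},
                                                  "diskSpace": {"status": "UP"}}},
}

def health_frame(payloads):
    """Flatten per-service actuator health JSON into one DataFrame row each."""
    rows = []
    for service, body in payloads.items():
        row = {"service": service, "status": body.get("status")}
        # Lift each component's status into its own column.
        for name, comp in body.get("components", {}).items():
            row[name] = comp.get("status")
        rows.append(row)
    return pd.DataFrame(rows)

df = health_frame(payloads)
# Keep only services whose overall status is not UP.
unhealthy = df[df["status"] != "UP"]
print(unhealthy["service"].tolist())
```

The same tabular view could equally be built with a PySpark DataFrame when the number of services, or the volume of collected logs, grows beyond what a single machine handles comfortably.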
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors contributing to this journal agree to publish their articles under the Creative Commons Attribution 4.0 International License, allowing third parties to share their work (copy, distribute, transmit) and to adapt it, under the condition that the authors are given credit and that in the event of reuse or distribution, the terms of this license are made clear.
