The largest software development companies carry out hundreds of deployments daily, each affecting currently operating IT (Information Technology) systems. This is possible thanks to automatic mechanisms that provide functional testing and subsequent application deployment. Unfortunately, there are currently no tools, or even a set of good practices, for incorporating IT security issues into the whole production and deployment process. This paper describes how to deal with this problem in the environment of a large mobile telecommunication operator.
Defects affect the properties and behavior of a casting during its service life. Since defects can occur for different reasons, they must be correctly identified and categorized so that the appropriate remedial measures can be applied. Several different approaches for categorizing casting defects have been proposed in the technical literature, mainly relying on the physical description, location, and formation of defects. There is a need for a systematic approach to classifying investment casting defects that considers appropriate attributes such as their size, location, identification stage, inspection method, consistency, and appearance. This paper presents a systematic approach for categorizing investment casting defects using multiple attributes: detection stage, size, shape, appearance, location, consistency, and severity of occurrence. Information about the relevant attributes of the major defects encountered in the investment casting process has been collected from an industrial foundry. The approach has been implemented in a cloud-based system to make it freely and widely accessible.
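As an illustration of such a multi-attribute classification, the following minimal Python sketch shows one possible defect record and attribute filter. The field names follow the attributes listed above, while the value vocabularies and the `matches` helper are assumptions for illustration, not taken from the paper or its cloud-based system.

```python
from dataclasses import dataclass

@dataclass
class CastingDefect:
    """One record in a multi-attribute defect classification.

    Field names mirror the attributes listed in the abstract; the example
    value vocabularies (stages, severities, etc.) are illustrative only.
    """
    name: str
    detection_stage: str   # e.g. "knock-out", "machining", "final inspection"
    size: str              # e.g. "micro", "macro"
    shape: str
    appearance: str
    location: str          # e.g. "surface", "sub-surface", "internal"
    consistency: str       # e.g. "isolated", "repeating"
    severity: int          # e.g. 1 (cosmetic) .. 5 (reject)

def matches(defect: CastingDefect, **criteria) -> bool:
    """Return True if the defect record satisfies every given attribute filter."""
    return all(getattr(defect, key) == value for key, value in criteria.items())

# Hypothetical example record and query.
shrinkage = CastingDefect("shrinkage porosity", "machining", "macro",
                          "irregular cavity", "spongy", "internal",
                          "repeating", 4)
print(matches(shrinkage, location="internal", size="macro"))   # True
```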
The article presents a method for 3D point cloud segmentation. The point cloud comes from a FARO LS scanner; the device creates a dense point cloud in which the 3D points are organized in a 2D table. The input data set consists of millions of 3D points, which makes the widely known RANSAC algorithm unusable in its basic form. We introduce modifications that make RANSAC applicable to such large data sets.
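One common way to adapt RANSAC to an organized, multi-million-point cloud is to exploit the 2D table structure when sampling, so that candidate planes are hypothesized from local windows rather than from the whole set. The sketch below illustrates this idea in Python with NumPy; it is an assumption-based example of such a modification, not the specific algorithm described in the article (window size, subsampling and thresholds are illustrative).

```python
import numpy as np

def ransac_plane_organized(cloud, window=64, iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane to an organized (H x W x 3) point cloud with RANSAC.

    Instead of sampling the full multi-million-point set, each candidate
    triple is drawn from a random local window of the 2D grid, which keeps
    the per-iteration cost independent of the cloud size.
    """
    rng = rng or np.random.default_rng()
    h, w, _ = cloud.shape
    pts = cloud.reshape(-1, 3)
    best_inliers, best_plane = 0, None

    for _ in range(iters):
        # Pick a random window in the 2D organization of the scan.
        r0 = rng.integers(0, max(1, h - window))
        c0 = rng.integers(0, max(1, w - window))
        patch = cloud[r0:r0 + window, c0:c0 + window].reshape(-1, 3)
        p1, p2, p3 = patch[rng.choice(len(patch), 3, replace=False)]

        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)

        # Count inliers on a random subset to keep the check cheap.
        sample = pts[rng.choice(len(pts), min(20000, len(pts)), replace=False)]
        inliers = np.sum(np.abs(sample @ normal + d) < dist_thresh)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)

    return best_plane, best_inliers
```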
The process of designing and creating an integrated distributed information system for storing the digitized works of scientists from the research institutes of the Almaty academic city is analyzed. The requirements for the storage of digital objects are defined, and a comparative analysis of the open-source software used for these purposes is carried out. The system fully provides the necessary computing resources for ongoing research and educational processes, simplifies the prospect of its further development, and makes it possible to build an advanced IT infrastructure for managing intellectual capital: an electronic library intended to store all books and scientific works of the Kazakhstan Engineering Technological University and the research institutes of the Almaty academic city.
This paper deals with a methodology for implementing a cloud manufacturing (CM) architecture. CM is a current paradigm in which dynamically scalable and virtualized resources are provided to users as services over the Internet. CM is based on the concept of cloud computing, which is essential in the Industry 4.0 trend. A CM architecture is employed to map users and providers of manufacturing resources; it reduces costs and development time during a product lifecycle. Because providers use different descriptions of their services, we propose taking advantage of semantic web technologies such as ontologies to tackle this issue. Indeed, robust tools are proposed for mapping providers' descriptions and user requests in order to find the most appropriate service. The ontology defines the stages of the product lifecycle as services and also takes into account the features of cloud computing (storage, computing capacity, etc.). The CM ontology will contribute to intelligent and automated service discovery. The proposed methodology is inspired by the ASDI framework (analysis–specification–design–implementation), which has already been used in the supply chain, healthcare and manufacturing domains. The aim of the new methodology is to provide an easy way of designing a library of components for a CM architecture. An example of the application of this methodology with a simulation model, based on the CloudSim software, is presented. The result can be used to help industrial decision-makers who want to design CM architectures.
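To illustrate how an ontology can give providers' heterogeneous service descriptions a shared vocabulary, the following sketch builds a tiny RDF graph with the rdflib Python library. The namespace, class names and properties (e.g. `ManufacturingService`, `storageCapacityGB`) are hypothetical placeholders, not the CM ontology proposed in the paper.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

# Hypothetical namespace for illustration only.
CM = Namespace("http://example.org/cloud-manufacturing#")
g = Graph()
g.bind("cm", CM)

# Product-lifecycle stages modelled as service classes.
for stage in ["Design", "Manufacturing", "Assembly", "Maintenance"]:
    g.add((CM[stage + "Service"], RDF.type, RDFS.Class))
    g.add((CM[stage + "Service"], RDFS.subClassOf, CM.ManufacturingService))

# A provider description mapped onto the shared vocabulary,
# including cloud-computing features such as storage and computing capacity.
g.add((CM.MillingProvider1, RDF.type, CM.ManufacturingService))
g.add((CM.MillingProvider1, CM.storageCapacityGB, Literal(500)))
g.add((CM.MillingProvider1, CM.computingCores, Literal(8)))

print(g.serialize(format="turtle"))
```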
The idea of using the Cloud of Things is becoming increasingly important for e-government, as it is considered a useful mechanism for facilitating the government's work. The most important benefit of using the Cloud of Things concept is the increased productivity that e-governments would achieve, which would eventually lead to significant cost savings and, in turn, have a highly anticipated future impact on e-governments. E-government's diverse goals face many challenges, and trust is one of the major challenges when deploying the Cloud of Things. In this study, a new trust framework is proposed which supports trust for the Internet of Things devices interconnected to the cloud, so that the services provided by e-government are delivered in a trusted manner. The proposed framework has been applied to a use case study to verify its trustworthiness in a real mission. The results show that the proposed trust framework is useful for ensuring a trusted environment for the Cloud of Things, so that it can continue providing and gathering the data needed for the services offered to users through e-government.
This paper proposes an advanced routing method aimed at increasing the power efficiency of IoT routing devices: routing-table computation is centralized and the associated load is offloaded entirely to the cloud environment. We introduce a phased solution to the formulated task. In general, the following steps were performed: the requirements for a system with cloud routing were stated, a possible solution was proposed, and the structure of the whole system was developed. To study the efficiency properly, an experiment was conducted using a prototype of the developed system on real-life cases, each representing its own cluster size (several topologies per size); the sizes used were 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27 and 29. The expected result of this research is a decrease in the cluster's reaction time to topology changes (the delay needed to renew routing tables), which improves the system's adaptivity.
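A minimal sketch of the centralized, cloud-side part of such a scheme is given below: the cloud receives the cluster topology and computes a next-hop routing table for every node, which lightweight IoT routers would then download instead of computing locally. The shortest-path formulation (Dijkstra) and the data layout are assumptions for illustration, not the method evaluated in the paper.

```python
import heapq
from collections import defaultdict

def compute_routing_tables(links):
    """Centralized routing-table computation (cloud-side sketch).

    `links` is an iterable of (node_a, node_b, cost) tuples describing the
    current cluster topology. For every node, Dijkstra's algorithm records
    the next hop towards every reachable destination.
    """
    graph = defaultdict(dict)
    for a, b, cost in links:
        graph[a][b] = cost
        graph[b][a] = cost

    tables = {}
    for src in list(graph):
        dist = {src: 0}
        next_hop = {}
        heap = [(0, src, None)]            # (cost, node, first hop from src)
        while heap:
            d, node, hop = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                   # stale heap entry
            if hop is not None:
                next_hop.setdefault(node, hop)
            for nbr, w in graph[node].items():
                nd = d + w
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    first = nbr if node == src else next_hop[node]
                    heapq.heappush(heap, (nd, nbr, first))
        tables[src] = next_hop
    return tables

# Example topology: the cloud computes tables once and pushes them to the nodes.
tables = compute_routing_tables([("A", "B", 1), ("B", "C", 1), ("A", "C", 5)])
print(tables["A"])   # {'B': 'B', 'C': 'B'}
```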
This paper addresses the increased complexity of reactive power management caused by the integration of distributed generation, as well as problems such as a large data exchange volume, low accuracy of reactive power distribution and a slow convergence rate, which may appear when the number of controlled objects is large. It proposes a reactive power and voltage control strategy based on virtual reactance cloud control. The coupling between active power and reactive power in the system is effectively eliminated through the virtual reactance. At the same time, huge amounts of data are processed in parallel using the cloud computing model of parallel distributed processing, realizing the uncertainty transformation between qualitative concepts and quantitative values. The power distribution matrix is formed according to graph theory, and the accurate allocation of reactive power is achieved by applying the cloud control model. Finally, the validity and rationality of this method are verified by simulating a practical node system.
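The "uncertainty transformation between qualitative concepts and quantitative values" mentioned above is typically performed with a forward normal cloud generator. The sketch below shows this standard generator in Python; it is a generic illustration of the cloud model, and the paper's specific virtual-reactance and reactive-power allocation logic is not reproduced here.

```python
import numpy as np

def forward_normal_cloud(ex, en, he, n=1000, rng=None):
    """Forward normal cloud generator: turn a qualitative concept described
    by (Ex, En, He) into n quantitative cloud drops (x_i, mu_i).

    Ex - expectation of the concept,
    En - entropy (spread of the concept),
    He - hyper-entropy (uncertainty of the entropy itself).
    """
    rng = rng or np.random.default_rng()
    en_i = rng.normal(en, he, n)                       # randomized entropy per drop
    x = rng.normal(ex, np.abs(en_i))                   # quantitative value of each drop
    mu = np.exp(-(x - ex) ** 2 / (2.0 * en_i ** 2))    # membership degree of each drop
    return x, mu

# Hypothetical example: a "moderate voltage deviation" concept around 0.02 p.u.
drops, membership = forward_normal_cloud(ex=0.02, en=0.005, he=0.001, n=500)
```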
Google Earth Engine (GEE) has recently provided a new way to classify land cover effectively using its built-in classifiers. However, there have been few studies on applications of GEE so far. Therefore, the goal of this study is to explore the capacity of the GEE platform for land cover classification in Dien Bien Province of Vietnam. Land cover classification for the years 2003 and 2010 was performed using multi-temporal Landsat images. Two algorithms integrated into the GEE platform, GMO Max Entropy and Classification and Regression Tree (CART), were applied for this classification. The results indicated that the CART algorithm performed better in terms of mapping land use; its overall accuracy for 2003 and 2010 was 80.0% and 81.6%, respectively. Significant changes between 2003 and 2010 were found: an increase in barren land and a reduction in forest land. This is likely due to the slash-and-burn agricultural practice of ethnic minorities in the province. Barren land seems to occur more often at locations near water sources, reflecting the local people's unsuitable farming practices. This study may provide useful information on land cover change in Dien Bien Province, as well as an analysis of the mechanisms of this change, supporting environmental and natural resource management for the local authorities.
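For readers unfamiliar with how such a classification is set up in GEE, a minimal sketch with the Earth Engine Python API is shown below. The dataset ID, band names, training asset, class property and region are assumptions chosen for illustration (and `smileCart` is the classifier name in the current API, which may differ from the call available at the time of the study); this is not the exact workflow used in the paper.

```python
import ee
ee.Initialize()

# Hypothetical region, training asset and band list, for illustration only.
region = ee.Geometry.Rectangle([102.8, 21.0, 103.6, 21.8])   # rough Dien Bien bounding box
training_points = ee.FeatureCollection("users/example/dien_bien_training_2003")
bands = ["SR_B1", "SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B7"]

# Median composite of Landsat 5 surface reflectance for 2003, clipped to the region.
image = (ee.ImageCollection("LANDSAT/LT05/C02/T1_L2")
         .filterDate("2003-01-01", "2003-12-31")
         .filterBounds(region)
         .median()
         .select(bands)
         .clip(region))

# Sample the composite at the labelled points and train a CART classifier.
samples = image.sampleRegions(collection=training_points,
                              properties=["landcover"], scale=30)
classifier = ee.Classifier.smileCart().train(features=samples,
                                             classProperty="landcover",
                                             inputProperties=bands)
classified = image.classify(classifier)
```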
The problem of performing software tests using a Testing-as-a-Service cloud environment is considered and formulated as online cluster scheduling on parallel machines with the total flowtime criterion. A mathematical model is proposed. Several properties of the problem, including solution feasibility and the connection to classic scheduling on parallel machines, are discussed. A family of algorithms based on a new priority rule called the Smallest Remaining Load (SRL) is proposed. We prove that algorithms from that family are not competitive relative to each other. A computer experiment using real-life data indicated that the SRL algorithm with the longest-job sub-strategy performs best. This algorithm is then compared with the Simulated Annealing metaheuristic. The results indicate that the metaheuristic rarely outperforms the SRL algorithm, obtaining worse results most of the time, which is counter-intuitive for a metaheuristic. Finally, we test the accuracy of prediction of job processing times. The results indicate high accuracy (91.4%) for predicting the processing times of test cases and even higher accuracy (98.7%) for predicting the remaining load of test suites. The results also show that schedules obtained through prediction are stable (the coefficient of variation is 0.2‒3.7%) and do not affect most of the algorithms (around 1% difference in flowtime), proving the considered problem is semi-clairvoyant. For the Largest Remaining Load rule, the predicted values tend to perform better than the actual values. The use of predicted values affects the SRL algorithm the most (up to a 15% flowtime increase), but it still outperforms the other algorithms.
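The following Python sketch shows one plausible reading of the SRL rule with the longest-job sub-strategy: whenever a machine becomes free, the test suite with the smallest remaining load is selected and its longest unscheduled test case is dispatched. Release times are assumed to be zero and the suite structure is simplified, so this is an illustrative approximation rather than the algorithm family defined in the paper.

```python
import heapq

def srl_schedule(suites, machines):
    """Smallest Remaining Load (SRL) with the longest-job sub-strategy (sketch).

    `suites` maps a suite name to a list of test-case processing times.
    Whenever a machine becomes free, the suite with the smallest remaining
    load is selected and its longest unscheduled test case is assigned to
    that machine. Returns the total flowtime (sum of completion times).
    """
    remaining = {s: sorted(jobs) for s, jobs in suites.items() if jobs}
    load = {s: sum(jobs) for s, jobs in remaining.items()}
    machine_free = [(0.0, m) for m in range(machines)]   # (time machine is free, id)
    heapq.heapify(machine_free)
    flowtime = 0.0

    while remaining:
        t, m = heapq.heappop(machine_free)
        suite = min(remaining, key=lambda s: load[s])     # smallest remaining load
        job = remaining[suite].pop()                      # longest-job sub-strategy
        load[suite] -= job
        if not remaining[suite]:
            del remaining[suite]
            del load[suite]
        finish = t + job
        flowtime += finish                                # release times assumed zero
        heapq.heappush(machine_free, (finish, m))
    return flowtime

# Hypothetical example: two suites scheduled on two machines.
print(srl_schedule({"suiteA": [3, 1, 2], "suiteB": [5]}, machines=2))
```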
The research was aimed at analysing the factors that affect the accuracy of merging point clouds when scanning over longer distances. The research takes into account the limited possibilities of target placement that occur while scanning opposite benches of quarries or open-pit mines, embankments from opposite banks of rivers, etc. In all these cases, there is an obstacle or void between the scanner and the measured object that prevents the optimal placement of targets and enlarges the scanning distances. The accuracy factors for cloud merging are the placement of targets relative to the scanner and the measured object, the target type and the instrument range. Tests demonstrated that for scanning objects with lower accuracy requirements over long distances, it is optimal to choose flat targets for registration. For objects with higher accuracy requirements, scanned from shorter distances, it is worth selecting spherical targets. The targets and the scanned object should be on the same side of the void.
The terrestrial laser scanner (TLS) is a rapidly developing class of survey instruments for capturing spatial data. A perfect facility in the oil industry does not exist. As facilities age, oil and gas companies often need to revamp their plants to make sure the facilities still meet their specifications. Due to the complexity of an oil plant site, there are difficulties in revamping: obtaining all dimensions and geometric properties, getting through narrow spaces between pipes, and having a description label for each object within the facility site. It is therefore necessary to develop an accurate observation technique to overcome these difficulties. TLS can be an unconventional solution, as it accurately measures the coordinates identifying the position of each object within the oil plant and provides highly detailed 3D models. This paper investigates creating a 3D model of the Ras Gharib oil plant in Egypt and determining the geometric properties of the plant equipment (tanks, vessels, pipes, etc.) using TLS observations and modeling in the CADWORX program. The modeling involves an analysis of several scans of the oil plant. All the processes required to convert the observed point cloud into a 3D model are described. The geometric properties of tanks, vessels and pipes (radius, center coordinates, height and, consequently, oil volume) are also calculated and presented. The results provide a significant improvement in the observation and modeling of an oil plant and prove that TLS is the most effective choice for generating a representative 3D model required for oil plant revamping.
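As a simple illustration of how such geometric properties can be derived from a segmented point cloud, the sketch below estimates the radius, center, height and nominal volume of a vertical tank. The vertical-axis assumption and the averaging approach are simplifications for illustration, not the CADWORX-based modeling workflow described in the paper.

```python
import numpy as np

def tank_geometry(points):
    """Estimate basic geometric properties of a vertical storage tank from
    its scanned point cloud (an N x 3 array of x, y, z coordinates in metres).

    A simple sketch: the axis is assumed vertical, the centre is taken as the
    mean of the horizontal coordinates, the radius as the mean radial distance
    from that centre, and the height as the z-extent of the points.
    """
    points = np.asarray(points, dtype=float)
    center_xy = points[:, :2].mean(axis=0)
    radii = np.linalg.norm(points[:, :2] - center_xy, axis=1)
    radius = radii.mean()
    height = points[:, 2].max() - points[:, 2].min()
    volume = np.pi * radius ** 2 * height        # nominal cylindrical capacity
    return {"center": center_xy, "radius": radius,
            "height": height, "volume": volume}
```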