DevOps Questions

With the help of the Oracle VirtualBox provider, how can we create our own customized Vagrant box with in-house tools?
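A minimal sketch of one common approach, assuming a VirtualBox VM has already been provisioned with the in-house tools (the VM name "inhouse-base" and box names below are hypothetical):

=============================================
# Package the prepared VirtualBox VM into a .box file
vagrant package --base inhouse-base --output inhouse-tools.box

# Register the box locally and bring up a machine from it
vagrant box add inhouse/tools inhouse-tools.box
vagrant init inhouse/tools
vagrant up
=============================================

The resulting .box file can also be hosted on an internal web server and referenced via config.vm.box_url so the whole team pulls the same image.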

I'm working on an old project that has many hg subrepos, which have proved problematic. I'm planning to remove the subrepos and fold them into the main repository. Is there a better approach, or any tools, for doing this? I also want to preserve the same metadata and am not interested in adding each subrepo as a single commit to the main repository.
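One approach worth evaluating is hg convert with a filemap, which rewrites a subrepo's full history into a subdirectory so it can be pulled and merged into the main repository with commit metadata intact. A sketch, with hypothetical paths and names:

=============================================
# Requires the bundled convert extension ([extensions] convert= in .hgrc)
echo "rename . libs/subrepo1" > filemap.txt
hg convert --filemap filemap.txt /path/to/subrepo1 subrepo1-converted

# Pull the rewritten (unrelated) history into the main repo and merge it
cd mainrepo
hg pull -f ../subrepo1-converted
hg merge
hg commit -m "Fold subrepo1 history into the main repository"
=============================================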

Hello,

I have a number of Linux shell scripts for automating particular operations, for example: merging SVN revisions from one branch to another, updating the statuses of JIRA tickets, making an Oracle eBS build from sources stored in SVN or Git, etc.

I'm trying to use these automation scripts for different infrastructures and customers, but I cannot share the script code with those customers. Is it possible to put the automation shell scripts in the cloud and call them from different infrastructures? Are there any available technical solutions for calling Linux shell scripts from the cloud?

For example, in the cloud I could have a shell function svn_merge() with parameters, and this script could be called from different infrastructures as "svn_merge <parameters>".
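One sketch of how this could work without exposing the code: host the scripts on a cloud VM and let each customer trigger them over SSH with a forced command, so they can pass parameters but never read the sources. The host name, paths, and dispatcher script below are assumptions:

=============================================
#!/bin/sh
# /opt/automation/dispatch.sh - whitelist of callable functions.
# Bound to each customer's key on the cloud host via ~/.ssh/authorized_keys:
#   command="/opt/automation/dispatch.sh" ssh-rsa AAAA... customer1
set -- $SSH_ORIGINAL_COMMAND
case "$1" in
  svn_merge) shift; exec /opt/automation/svn_merge.sh "$@" ;;
  *) echo "unknown command: $1" >&2; exit 1 ;;
esac

# Customer side (sees results only, never the script source):
#   ssh automation@cloud-host svn_merge <parameters>
=============================================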

I would be happy to see any suggestions or links to free or commercial solutions.

Thanks in advance!

Arturs

 

Dear Experts,

I need your expert advice on setting up a new repository structure for our product development.

We have a software product with 15 components, and each of the 15 components has at least 15 modules.

Earlier, our admin had configured the Stash server for us.

The current setup is like this:
Our IT engineer created 15 projects in the Stash/Bitbucket server for the 15 components.
For each of the 15 modules in a given component, an individual repository was created.

Thus we have around 225 + 25 (for additional modules) repositories, 250 in total, to manage.

Individual developers create their feature branches in their respective repositories and merge them to the production branch in the same repository after release.

A feature branch gets created only in the repository of the particular module affected by that feature.

So if a developer is working on a feature that affects 10 modules, they create a branch in each of the 10 affected repositories: 10 branches for a single feature.

In this way, we have many small teams working on the individual repositories corresponding to their modules. Packages are built from each individual repository and delivered to our infra team, who deploy the product on our hosted servers.

Since the source is spread all over the place, no baselines/tags are created for this product. Also, managing this many repositories is too much for a single CM admin to handle.

As the CM admin, I'm thinking of suggesting this structure for the product:
Create a single project (Project A) in Stash.
Create a single repository (Repo A) in this project.
Create a folder for each component, i.e. 15 folders, under the repository root:

Project A - /RepoA/Component1/Module1 Module2 ... Module15
            /RepoA/Component2/Module1 Module2 ... Module15
            /RepoA/Component3/Module1 Module2 ... Module15
            :
            /RepoA/Component15/Module1 Module2 ... Module15

In this way we'll have a single repository to handle: the CM admin will create branches, and developers will just make changes and commit their code.
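A single repository also need not force every developer to check out all 15 components; as a sketch, Git's sparse checkout (Git 2.25+, with a hypothetical clone URL) limits a working copy to the modules a team owns:

=============================================
git clone --filter=blob:none --sparse ssh://stash.example.com/projecta/repoa.git
cd repoa
# Check out only the modules this team works on
git sparse-checkout set Component1/Module3 Component7/Module2
=============================================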

Appreciate your response on these lines:

1. Is my suggestion to host a single repository for all the components correct? Is this achievable, and what is your opinion on this repository structure?

2. We are developing a multi-tier web application using PHP, Python, and JavaScript. This application gets deployed on multiple servers. With this single-repository structure, do you see any issue or obstacle at a later stage of development? What precautions should we take, if any?

3. Since the application is multi-tier, developers actually develop in a shared work area in the development environment (where multiple servers are set up to create a production-like environment), so at times one developer can overwrite another's changes.

What is your opinion on this kind of development?

4. Using this repository structure, how do we resolve conflicts? Whenever a developer tries to push/pull changes, he/she might face multiple conflicts, and not only from their own changes. Should we gather all the developers at one desk to resolve the conflicts, or is there a better way?

5. We have many sets of features being developed on different branches. At times, many components are not modified in some branches, yet we still package and release them. What is your advice on this?

As CM admin, I see a lot of advantages in managing branches and merges in a single-repository structure. What do you suggest for our kind of development?

I have read many articles on the web about the advantages and disadvantages of having single versus multiple repositories. My dev team is not convinced by my approach, so I need answers to all of my (actually their) questions in one place.

I would appreciate a detailed response to each of the issues above; your assistance in this regard will be very much appreciated.

Eagerly awaiting your response.

 

Thanking you,

Deepak.

By Paul Perry - December 18, 2015

Hello Everyone,

I'd like to understand how companies using SVN, TFS, Git, or GitHub back out production deployments, from two perspectives: your re-deployment method and, more importantly, how you mark the rollback in your repository - what do you do internally with your SCM tool to force a message to developers that the code was backed out?
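For the Git case, a minimal sketch of one convention: revert the release merge so the back-out is an explicit commit every developer sees on their next pull, and flag it with the message and a tag (the SHA placeholder and release number are hypothetical):

=============================================
# Create a commit that undoes the release merge
git revert -m 1 <release-merge-sha>
git commit --amend -m "ROLLBACK: release 2.3.1 backed out of production"
git tag rollback/2.3.1
git push origin master --tags
=============================================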

Thanks

Paul Perry

Hello All,

I am looking for a solution for monitoring the traffic received on UDP ports (preferably through Nagios).

As per our setup, we are using a data diode for a highly secure unidirectional flow. A backend third-party application sends data, and it comes through the data diode to the top layer on a specific UDP port (a dual-node active-active setup for high availability).

Sometimes one leg of the data diode stops working, which ultimately impacts the overall data flow.

At the moment I am using a customised shell script that runs TCPDUMP on the UDP ports to ensure we are receiving data on both legs of the data diode. The script checks the TCPDUMP traffic every 15 minutes; if no data was received in the last 15 minutes, it sleeps for 10 minutes and checks again.

If no data is received on the second check, it raises an alarm via Nagios.

Please let me know if there is a more optimised solution for monitoring this setup.
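For reference, the tcpdump check can be packaged as a regular Nagios plugin so the re-check logic moves into Nagios itself (retry_interval and max_check_attempts) instead of hand-rolled sleeps. A sketch, with port, interface, and capture window as assumed parameters:

=============================================
#!/bin/sh
# check_udp_traffic - OK (exit 0) if any packet arrives on the UDP port
# within the capture window, CRITICAL (exit 2) otherwise.
PORT=${1:-5000}; IFACE=${2:-eth0}; WINDOW=${3:-30}
COUNT=$(timeout "$WINDOW" tcpdump -i "$IFACE" -c 1 -n "udp port $PORT" 2>/dev/null | wc -l)
if [ "$COUNT" -gt 0 ]; then
    echo "OK - traffic seen on UDP/$PORT ($IFACE)"
    exit 0
else
    echo "CRITICAL - no traffic on UDP/$PORT ($IFACE) within ${WINDOW}s"
    exit 2
fi
=============================================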

 

Thanks in Advance !!

 

Hello,

I am new to Ansible and am trying to set it up to automate our deployment process.

I have written a simple playbook to execute a shell script (located on the remote host) on that host using the command module.

=============================================

---
- name: start server.
  hosts: app
  remote_user: app

  tasks:
    - name: Start APP Service
      command: /home/app/current-app/etas/util_bin/start_app
      register: comm_out

    - debug: msg="{{ comm_out.stdout }}"
    - debug: msg="{{ comm_out.stderr }}"

==============================================

But it seems that it is not able to load the shared libraries on the remote host and fails to start the application process; however, the ansible-playbook command line exits with a success status.

 

Error:

===================
/home/app/current-app/etas/bin/eta_registry: error while loading shared libraries: libapp_r64.so.1: cannot open shared object file: No such file or directory
/home/app/current-app/etas/bin/eta_log_manager: error while loading shared libraries: libapp_r64.so.1: cannot open shared object file: No such file or directory
======================
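A likely cause: the command module runs the program in a non-interactive, non-login shell, so environment variables such as LD_LIBRARY_PATH set in the app user's .profile are never loaded; and the start script itself probably exits 0 even when the binaries fail, which is why ansible-playbook still reports success. A sketch of a fix using the task-level environment keyword and an explicit failure condition (the library path below is an assumption):

=============================================
    - name: Start APP Service
      command: /home/app/current-app/etas/util_bin/start_app
      environment:
        LD_LIBRARY_PATH: /home/app/current-app/etas/lib  # assumed location of libapp_r64.so.1
      register: comm_out
      failed_when: "'error while loading shared libraries' in comm_out.stderr"
=============================================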

 

Please help with this.

 

Thanks in Advance!!


How have others solved the resourcing justification for build/release engineers? Based on the number of supported developers? The number of tools? The number of deployments? Are there baseline ratios or other industry standards based on specific products or supported platforms?

Hi All,

I am aware that application servers (both enterprise and DevOps tools) can be configured to generate critical data files, such as server log files. Data files can also be generated at various levels within an application server. This may vary between software vendors, but it is certainly possible to generate data files in different formats.

However, companies usually have an infrastructure that consists of many different types of application servers that may be interconnected via LAN, the Internet, load balancers, etc. It becomes important to capture critical real-time data from these data files and visualize the status of servers or KPIs (especially in production environments).

So, I would like to know whether there are any data visualization tools that can generate graphs based on these data files (as input). If so, can you please list them and briefly explain their pros and cons?
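As one free, low-tech illustration of graphing straight from a data file, a gnuplot sketch (the log path and field position are assumptions):

=============================================
# Extract a numeric column (e.g. response time) from a log, then plot it
awk '{ print NR, $10 }' /var/log/app/access.log > /tmp/latency.dat
gnuplot -e "set terminal png; set output '/tmp/latency.png'; plot '/tmp/latency.dat' using 1:2 with lines title 'latency'"
=============================================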

Appreciate your help!

Thanks,

Pradeep

   

Hi,

 

Is there any SCM tool that allows you to define a dependency between any two artifacts within a project?

Technically this is possible with build tools like Maven and Make, but in the case of, say, two Word documents, can such a dependency be defined using SCM tools?

 

Thanks
