300+ TOP Parallels Workstation for Windows and Linux Interview Questions [UPDATED]

  1. 1. Does Parallels Workstation Virtualize Mac OS X Or Support Mac OS X As Host OS?

    Parallels Workstation does not virtualize Mac OS X because Apple's licensing permits Mac OS X to run only on Apple hardware; emulating or virtualizing it on non-Apple machines violates that license. You can, however, virtualize Mac OS X under a Mac OS X host with the proper Parallels virtualization solution, such as Parallels Desktop for Mac.

  2. 2. What Are The Minimum Requirements Suggested For Parallels Workstation?

    The minimum requirements suggested for Parallels Workstation are:

    • Minimum 2 GB of RAM; 4 GB is recommended.
    • Support for Intel VT-x (VT-d optional).
    • Minimum 1.66 GHz x86 (32-bit) or x64 (64-bit) CPU.
    • Supported host operating systems: Windows and Linux.


  4. 3. What Is Para-virtualization? Is It Supported By Parallels Workstation?

    Para-virtualization is a technique in which the software interface presented to a virtual machine is similar, but not identical, to the underlying hardware: the guest runs against a virtual implementation of the hardware interface rather than the hardware itself. Parallels Workstation supports para-virtualization, and even the CPU can be virtualized this way.

  5. 4. What Do You Understand By Vt-x And Vt-d And Is It A Necessity For Parallels Workstation?

    VT-x, also known as Intel x86 virtualization, enables an x86 CPU to share its resources among multiple virtual OS guests; this is also referred to as hardware virtualization. VT-d is a facility provided by the IOMMU (Input/Output Memory Management Unit) that enables a guest OS to access peripheral devices directly through DMA and interrupt remapping; this is also known as PCI passthrough. VT-x is effectively a necessity for hardware-assisted virtualization in Parallels Workstation, while VT-d is optional.



  7. 5. What Is The Feature That Emulates Legacy Hardware With Assistance Of Any Hardware Support?

    Parallels Workstation provides a technology called Controlled Native Execution that emulates legacy hardware such as Intel 80386 and Pentium M CPUs and primitive peripheral devices that are now obsolete but are sometimes needed when testing systems against old computer architectures.



  9. 6. Can You Manage Parallels Workstations By A Centralized Server?

    Yes. Parallels Management Console is a centralized framework from Parallels for managing and controlling your custom virtual infrastructure, and it extends that control to desktop virtual machines running under Parallels Workstation. This solution is mostly used in business infrastructure to implement cloud environments.

  10. 7. What Is Use Of Parallels Transporter?

    Parallels Transporter is a solution provided by Parallels for easy migration of virtual machines. It is an efficient and reliable tool in the Parallels virtualization line that can flexibly back up and restore virtual machines. Parallels Workstation also supports Parallels Transporter for routine maintenance.



  12. 8. What Is Good Neighbor Policy?

    "Good neighbor policy" is a layman's term for the sandboxing technique implemented by Parallels Workstation, since it isolates environments for testing. Parallels Workstation lets you isolate virtual environments from the host and from one another, helping ensure that testing or running background workloads in one environment does not interfere with the others. This technique is also used for malware analysis.

  13. 9. What Is Parallels Workstation Extreme And How Does It Differ From Parallels Workstation?

    Parallels Workstation Extreme is a more advanced virtualization solution than Parallels Workstation, optimized for greater graphics and processing power. It delivers an optimized architecture to suit business cases with cloud infrastructure requirements, and its high-end system requirements target workstations and enterprise-level servers. The Extreme edition also supports building grid-computing setups and clusters that are elastic and can be controlled remotely.



  15. 10. What Are Substitutes For The Parallels Workstation Solution?

    There are many substitutes for Parallels Workstation, since it is a Type-2 hypervisor running on the resources of a host OS. The market offers a wide range of such hypervisors, such as VMware Workstation and Oracle VirtualBox; an open-source alternative is QEMU. VMware Workstation is the closest competitor, as VMware's virtualization solutions were released well before Parallels entered the market.



  17. 11. What All Host OS Can Run Parallels Workstation?

    A number of operating systems can host Parallels Workstation, starting with Windows 7, Windows XP, and Windows Server 2008, plus Linux variants such as Red Hat, openSUSE, Ubuntu, and many more. It should be noted that Mac OS X is not a supported host for Parallels Workstation. The product is mainly compatible with Windows and Linux hosts, and it supports a large list of guest operating systems that it can virtualize.



  19. 12. Give Details Of Pcoip?

    Teradici PC-over-IP (PCoIP) is a remote graphics-acceleration solution that extends an individual workstation to a remote thin-client desktop. This feature is present in Parallels Workstation Extreme for efficient remote control over the network. The technology depends heavily on network speed and available graphical power, and it is optimized according to the resources available.



  21. 13. What Benchmarks Has Parallels Workstation Extreme Tested For Graphic Card Support?

    Qualification for high-powered graphics support is an asset of Parallels Workstation Extreme. It qualifies not only ATI FirePro GL and NVIDIA Quadro technology, for example, but many more critical technologies. These qualifications are important for gaining user confidence in the business solution and for providing quality service on a powerful, optimized architecture.



  23. 14. Parallels Workstation Extreme CPU Requirements Are Not Flexible? Is It True And Why?

    Yes, it is true: Parallels Workstation Extreme does not support all CPUs and is not at all recommended for deployment on home machines or PCs. It requires at minimum Intel Xeon 5500- or 5600-series processors with an X58-series chipset or higher, a configuration easily found in servers and workstations.

  24. 15. Which Parallels Technology Will Be Suitable For Virtualization Solution On A Hp Z800 Workstation?

    Parallels offers a wide variety of virtualization solutions that can be deployed on a machine like the HP Z800 workstation. The most appropriate and optimized choice would be Parallels Workstation Extreme, which carries a long list of hardware qualifications for compatible technologies. It is graphically quite powerful and ideal for business solutions.



300+ TOP Finite Element Analysis (FEA) Interview Questions [REAL TIME]

  1. 1. What Is The Finite Element Method (fem)?

    The FEM is a numerical method used to solve ordinary and partial differential equations. The method is based on integrating the terms of the equation to be solved, in lieu of point-discretization schemes like the finite difference method. The FEM utilizes the method of weighted residuals and integration by parts (Green-Gauss Theorem) to reduce second-order derivatives to first-order terms. The FEM has been used to solve a wide range of problems, and permits physical domains to be modeled directly using unstructured meshes, typically based upon triangles or quadrilaterals in 2-D and tetrahedra or hexahedra in 3-D. The solution domain is discretized into individual elements; these elements are operated upon individually and then assembled and solved globally using matrix solution techniques.

  2. 2. What Is The History Of The Fem?

    Early work on the numerical solution of boundary-value problems can be traced to the use of finite difference schemes; Southwell used such methods in his book published in the mid-1940s. The beginnings of the finite element method actually stem from these early numerical methods and the frustration associated with attempting to use finite difference methods on more difficult, geometrically irregular problems. Beginning in the mid-1950s, efforts to solve continuum problems in elasticity using small, discrete "elements" to describe the overall behavior of simple elastic bars began to appear, and such techniques were initially applied to the aircraft industry. Actual coining of the term "finite element" appeared in a paper by Clough in 1960. The early use of finite elements lay in the application to structure-related problems. However, others soon recognized the versatility of the method and its underlying rich mathematical basis for application in non-structural areas. Since these early works, rapid growth in usage of the method has continued since the mid-1970s. Numerous articles and texts have been published, and new applications appear routinely in the literature.



  4. 3. What Is The Method Of Weighted Residuals, I.e., Galerkin’s Method?

    The underlying mathematical basis of the finite element method first lies with the classical Rayleigh-Ritz and variational calculus procedures. These theories provided the reasons why the finite element method worked well for the class of problems in which variational statements could be obtained (e.g., linear diffusion-type problems). However, as interest expanded in applying the finite element method to more types of problems, the use of classical theory to describe such problems became limited and could not be applied, e.g., to fluid-related problems. Extension of the mathematical basis to non-linear and non-structural problems was achieved through the method of weighted residuals (MWR), originally conceived by Galerkin in the early 20th century. The MWR was found to provide the ideal theoretical basis for a much wider range of problems than the Rayleigh-Ritz method. Basically, the method requires the governing differential equation to be multiplied by a set of predetermined weights and the resulting product integrated over space; this integral is required to vanish. Technically, Galerkin's method is a subset of the general MWR procedure, since various types of weights can be utilized; in the case of Galerkin's method, the weights are chosen to be the same as the functions used to define the unknown variables. Most practitioners of the finite element method now employ Galerkin's method to establish the approximations to the governing equations.
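
    The Galerkin procedure described above can be made concrete with a minimal 1-D sketch (Python with NumPy; the function name and mesh choices here are illustrative, not from any particular FEM package). It solves -u'' = f on [0, 1] with u(0) = u(1) = 0, using piecewise-linear "hat" functions as both trial and weight functions and assembling the weighted-residual integrals element by element:

```python
import numpy as np

def fem_1d_poisson(n_el, f=lambda x: 1.0):
    """Galerkin FEM for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    using piecewise-linear (hat) basis functions on a uniform mesh."""
    h = 1.0 / n_el
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    K = np.zeros((n_el + 1, n_el + 1))   # global stiffness matrix
    F = np.zeros(n_el + 1)               # global load vector
    for e in range(n_el):                # element-by-element assembly
        i, j = e, e + 1
        ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h   # integral of phi_i' phi_j'
        xm = 0.5 * (nodes[i] + nodes[j])                # one-point (midpoint) quadrature
        fe = f(xm) * (h / 2.0) * np.array([1.0, 1.0])   # integral of f * phi_i
        K[np.ix_([i, j], [i, j])] += ke
        F[[i, j]] += fe
    u = np.zeros(n_el + 1)               # Dirichlet BCs: solve on interior nodes only
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return nodes, u

nodes, u = fem_1d_poisson(16)
print(np.max(np.abs(u - nodes * (1 - nodes) / 2)))  # error vs. exact solution for f = 1
```

    For f = 1 the exact solution is u(x) = x(1 - x)/2, and the linear-element Galerkin solution reproduces it exactly at the nodes, which makes this a convenient check of the assembly.
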

  5. 4. Why Should One Use Finite Elements?

    The versatility of the FEM, along with its rich mathematical formulation and robustness makes it an ideal numerical method for a wide range of problems. The ability to model complex geometries using unstructured meshes and employing elements that can be individually tagged makes the method unique. The ease of implementing boundary conditions as well as being able to use a wide family of element types is a definite advantage of the scheme over other methods. In addition, the FEM can be shown to stem from properly-posed functional minimization principles.



  7. 5. Can The Fem Handle A Wide Range Of Problems, I.e., Solve General Pdes?

    While the FEM was initially developed to solve diffusion type problems, i.e., stress-strain equations or heat conduction, advances over the past several decades have enabled the FEM to solve advection-dominated problems, including incompressible as well as compressible fluid flow. Modifications to the basic procedure (utilizing forms of upwinding for advection, i.e., Petrov-Galerkin and adaptive meshes) allow general advection-diffusion transport equations to be accurately solved for a wide range of problems.



  9. 6. What Is The Advantage Of The Fem Over Finite Difference (fdm) And Finite Volume (fvm) Methods?

    The major advantages of the FEM over the FDM and FVM are its built-in ability to handle unstructured meshes, a rich family of element choices, and natural handling of boundary conditions (especially flux relations). The FDM is generally restricted to simple geometries in which an orthogonal grid can be constructed; for irregular geometries, a global transformation of the governing equations (e.g., boundary-fitted coordinates) must be made to create an orthogonal computational domain. Likewise, implementation of boundary conditions in the FDM can be cumbersome. The FVM is an integral approach similar to the FEM, with volumes being used instead of elements (each equation is integrated over a control volume about a node, typically extending half a cell width in each direction). The divergence theorem is used to establish the final equation set. Solutions are obtained at volume faces, vertices, or volume centers; some methods employ staggered grids. While the FVM can handle irregular domains using unstructured grids (stemming from the FEM), the required averaging over the volume limits the method to second-order spatial accuracy.

  10. 7. Is There Any Connection Between The Fem And The Boundary Element Method (bem)?

    In the BEM, one reduces the order of the problem by one, i.e., a two-dimensional domain is reduced to a line integral, and a three-dimensional domain becomes a two-dimensional surface. The BEM only requires the discretization of the boundaries of the problem domain; no internal meshing is required, as in the FDM, FVM, and FEM schemes. The BEM requires two applications of the Green-Gauss Theorem (versus one in the FEM employing Galerkin's method). The method is ideal for handling irregular shapes and boundaries that may extend to (near) infinity. One can place interior nodes within the BEM to obtain internal values easily. The BEM works quite effectively for linear differential equations, principally elliptic equations. However, if one desires to solve nonlinear advection-diffusion transport equations, the method becomes very cumbersome and computationally demanding: BEM matrices are dense and do not readily permit efficient, sparse matrix solvers to be used as in the FEM.



  12. 8. What Is Adaptivity, I.e., H-, P-, R-, And Hp-adaptation?

    Adaptivity is an active research area involving either remeshing or increased interpolation order during the solution process. The method is particularly effective in fluid flow, heat transfer, and structural analysis. The use of mesh refinement has been especially effective in aerodynamic simulations for accurately capturing shock locations in compressible flow. Generally, there are two basic types of adaptation: h-adaptation (mesh refinement), where the element size varies while the orders of the shape functions are kept constant; and p-adaptation, where the element size is constant while the orders of the shape functions are increased (linear, quadratic, cubic, etc.). Adaptive remeshing (known as r-adaptation) employs a spring analogy to redistribute the nodes in an existing mesh; no new nodes are added, so the accuracy of the solution is limited by the initial number of nodes and elements. In mesh refinement (h-adaptation), individual elements are subdivided without altering their original positions. The use of hp-adaptation combines both h- and p-adaptation strategies and produces exponential convergence rates. Both mesh refinement and adaptive remeshing are now routinely used in many commercial codes. A spectral element is a special class of FEM that uses a series of orthogonal basis functions whereby the unknown terms are solved at selected spectral nodes; the method is stable and highly accurate, but can become time consuming.
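
    As a minimal illustration of h-adaptation, the sketch below (Python; the per-element error-indicator values are hypothetical, whereas real codes compute them from the solution) bisects every 1-D element whose indicator exceeds a tolerance:

```python
import numpy as np

def h_refine(nodes, err, tol):
    """One pass of 1-D h-adaptation: bisect every element whose
    per-element error indicator exceeds tol by inserting its midpoint."""
    new_nodes = [nodes[0]]
    for e in range(len(nodes) - 1):
        if err[e] > tol:  # element flagged for refinement
            new_nodes.append(0.5 * (nodes[e] + nodes[e + 1]))
        new_nodes.append(nodes[e + 1])
    return np.array(new_nodes)

nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
err = np.array([0.02, 0.30, 0.30, 0.02])  # hypothetical indicators
print(h_refine(nodes, err, 0.1))  # the two middle elements are bisected
```

    Repeating such passes until every indicator falls below the tolerance is the essence of mesh refinement; p- and hp-adaptation instead (or additionally) raise the shape-function order on the flagged elements.
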

  13. 9. How Difficult Is It To Write A Fem Program?

    Writing a FEM code is not terribly difficult, especially if one develops the code utilizing a general set of subroutines, e.g., input data, integration, assembly, boundary conditions, output, etc. About 90% of a FEM program is generic, which is fairly common among most FEM codes – they tend to use similar matrix solvers, quadrature rules, and matrix assembly procedures; I/O is usually the major difference among commercial FEM codes – some are easy, and some are not so easy to learn and use. A source listing of the FORTRAN codes can be found in the FORTRAN file folder; flow charts can be obtained from the authors. Likewise, MATLAB and MathCad files are also available. One of the best commercial packages now on the market is COMSOL, which also allows users to write their own solver packages and PDEs.



  15. 10. Are There Any Recommended Commercial Fem Packages That Are Versatile In Handling A Wide Range Of Problems?

    Any of the well-known and widely versatile FEM codes now on the market are good – it just depends on how comfortable the user is with the I/O part of the program. COMSOL, as mentioned before, is quite easy and very versatile – handling a wide range of problem classes including fluid flow (with turbulence), heat transfer, structural analysis, electrodynamics, and general PDEs including species transport, chemical reactions, and groundwater/porous media flows.

  16. 11. Any Suggested Web Sites For Fem?

    There are several recommended web sites:

    • www.wiley.com/go/bhatti
    • http://dehesa.freeshell.org/FSEM
    • http://www.ncacm.unlv.edu
    • http://www.cfd-online.com/Resources/topics.html#fem


  18. 12. How Long Does It Take For Me To Be Able To Use A Fem Program?

    Some programs allow you to solve problems fairly quickly. It is always highly recommended that you work through the example problems generally provided with most commercial software. COMSOL, ANSYS, ALGOR, and NASTRAN all run on PCs.



  20. 13. Why Would I Want To Use A Fem Program?

    The versatility, ease of data input, and solution accuracy make the FEM one of the best numerical methods for solving engineering problems. FEM programs are the backbone of structural analyses, and are becoming more widely accepted for problems in which geometries are complex.

  21. 14. Is This A Method That Will Soon Become Obsolete?

    The introduction of the BEM and meshless methods might appear to indicate the eventual obsolescence of the FEM. However, these newer methods are still years away from being developed to the point of widespread applicability found in the FEM. The FEM will be around for many years to come, and recent advances with the inclusion of spectral schemes and adaptivity make it especially attractive now.

  22. 15. How Expensive Is A Fem Code?

    FEM codes range from those that can be found for free on the web to others costing many thousands of dollars. Those that run on PCs are generally inexpensive, yet provide powerful tools for solving a number of large scale problems.



  24. 16. What Kind Of Hardware Do I Need To Run A Fem Code?

    A PC with a sufficiently fast processor, at least 256 MB of RAM, and at least 20 GB of hard disk will permit many problems to be solved that once could only be run on mainframe computers. For major FEM calculations, a PC with 1 GB of RAM, a 60 GB hard disk, and a Pentium 4/3.2 GHz or better processor would provide more than adequate capability. The state of the art in PC hardware is improving constantly; in a few years, even these suggested requirements will seem obsolete.

300+ TOP PeopleCode Interview Questions [UPDATED]

  1. 1. What Are Classes In Peoplecode?

    • A class is the formal definition of an object and acts as a template from which an instance of an object is created at runtime.
    • The class defines the properties of the object and the methods used to control the object’s behavior.
    • PeopleSoft delivers predefined classes (such as Array, File, Field, SQL, and so on).
    • You can create your own classes using the Application class. You can also extend the functionality of the existing classes using the Application class.
  2. 2. What Are SetIDs And Table Set Sharing?

    • SetID is the highest-level key in PeopleSoft. Location, Department, and Jobcode tables are control tables, and SetIDs determine which control-table rows apply during a transaction.
    • Tableset sharing is where control tables are listed; it is accessed by business unit.

    EX: If we have two locations, Arizona and Ohio, with SetIDs XYZ and ABC, and we change Ohio's SetID to XYZ, then Ohio can access all information defined under Arizona/XYZ, such as jobcodes.



  4. 3. What Is Auto Update In Peoplesoft?

    This record field property updates the date field of a record with the server's current date and time whenever a user creates or updates a row. Even if the user enters data into that field, the value is overwritten with the server's current date and time.

  5. 4. Why Do Peoplesoft Often Use Views As Search Records?

    Search views are used for three main reasons:

    • Adding criteria to the search dialog page
    • Providing row-level security
    • Implementing search page processing
  6. 5. What Is Record Group? Which Records Can Be Included Into A Record Group?

    • A record group consists of records with similar functionality.
    • To set up a record in a record group, we should enter a set control field value in the record properties.


  8. 6. What Are Metastrings Or Metasql?

    MetaStrings (MetaSQL) are special SQL expressions preceded by a % sign.

    MetaStrings are used in the following:

    • SQLExec
    • Application Designer, to build dynamic views
    • Rowset object methods (Select, Fill)
    • SQL objects
    • Record class methods (Insert, Update, and so on)
    • Application Engine
    • COBOL
    • Scroll buffer functions (ScrollSelect and its relatives)
  9. 7. What Is The Difference Between Sqlexec And Createsql?

    • SQLExec bypasses the component buffer and contacts the database directly to retrieve data.
    • However, it retrieves data only one row at a time, so bulk inserts are not possible with it.
    • With CreateSQL, in contrast, we are able to insert data in bulk.


  11. 8. What Is An Array In People Code?

    • An array is a collection of data storage locations, each of which holds the same type of data.
    • The maximum depth of a PeopleCode array is 15 dimensions.
    • Push and Unshift are array functions used to add elements: Push at the end of the array and Unshift at the beginning.
    • Pop is an array function used to select and delete an element from the end of the array.
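
    These array operations map directly onto list operations in most languages; as an illustrative analogy only (Python shown here, not PeopleCode):

```python
# Illustrative Python analogy for the PeopleCode array functions above.
arr = [2, 3]
arr.append(4)     # like Push: add an element at the end
arr.insert(0, 1)  # like Unshift: add an element at the beginning
last = arr.pop()  # like Pop: remove and return the last element
print(arr, last)  # [1, 2, 3] 4
```
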
  12. 9. How To Store Output Of A Sql Query In A Variable Using Peoplecode?

    Using the SQLExec function:

    SQLExec("SELECT EMPLID, NAME FROM PS_PERSONAL_DATA", &Emplid, &Name);



  14. 10. What Is Sub Page, Secondary Page In Peoplesoft?

    • A subpage is utilized where you want to display or capture similar information for various entities. For example, capturing an address, whether for a company or for a person, needs similar information: street address, state/county, country, postal code, etc. In those situations a subpage is used, designed once and reused in multiple places.
    • A secondary page is used to display or capture secondary information about an entity. A secondary page can use various subpages, but the reverse is not true.
  15. 11. What Are The Different Ways We Can Set Up The Portal Security To Access Component In Portal?

    • Structure & content
    • Menu import
    • Register component


  17. 12. What Are The Rules Used By The System To Determine Whether A User Is Authorized To Update An Object?

    • The user should have the permission to update the object. This is given by the Definition security.
    • The group, which holds the object, should be added to the permission list of the user in update mode.


  19. 13. How To Give Access To The Records That Are To Be Used In A Query?

    To give access to the records that are to be used in a query, we have to create a new query security tree, add the records to which we want to give access, and then assign an access group to the tree. After that, we have to add that query tree and query access group to the permission list.

  20. 14. What Is The Use Of Primary Permission List In User Profile?

    Primary permission list is used for mass change and definition security purposes.

  21. 15. How To Populate Data Into Grid In Online?

    &Rs.Select() or ScrollSelect()



  23. 16. What Are The Built-in Functions Used To Control Translate Values Dynamically?

    AddDropDownItem() and DeleteDropDownItem()

  24. 17. What Is Deferred Processing And Its Advantages?

    Deferred processing postpones some user actions to reduce the number of trips to the database, which increases performance (system edits, FieldEdit, and FieldChange are deferred).

    Advantages:

    • Reduces the network traffic.
    • Increases the performance.


  26. 18. Can You Save The Component Programmatically?

    Yes, using the DoSave() and DoSaveNow() functions.



  28. 19. What Is Difference Between Getrowset And Createrowset In People Code?

    • GetRowset is used to get a rowset for a record in the component buffer.
    • CreateRowset is used to create a rowset for a record in the database; this is also called a standalone rowset.
  29. 20. What Is An Array In People Code? What Is Maximum Dimension Of An Array? Which Function Inserts Values Into An Array? What Is “pop”?

    • An array is a collection of data storage locations, each of which holds the same type of data.
    • The maximum depth of a PeopleCode array is 15 dimensions.
    • Push and Unshift are the functions used to insert values: Push adds at the end of the array and Unshift at the beginning. Pop is a function that selects and deletes an element from the end of the array.
  30. 21. What Is Getlevel0()? What Is The Use Of %subrec And %selectall Functions?

    • GetLevel0() is used to get the rowset of level 0.
    • %SubRec is used only in Dynamic View SQL, where it expands to the columns of a subrecord.
    • %SelectAll is shorthand for selecting all fields in the specified record, wrapping date/time fields with %DateOut and %TimeOut.
  31. 22. What Is Difference Between Saveprechange And Savepostchange? Which Function Directly Interacts With The Database?

    • SavePreChange is the last event that executes before the data is updated from the component buffer to the database.
    • SavePostChange fires after the data has been updated in the database.
    • SQLExec is the function that interacts directly with the database.
  32. 23. What Is Difference Between Field Default And Row Init?

    • FieldDefault specifies the default value for a field, applied only in Add mode.
    • RowInit fires only when a row of data comes from the database into the component buffer.
  33. 24. What Is Default Processing?

    • In default processing, any blank fields in the component are set to their default value.
    • You can specify the default value either in the Record Field Properties, or in FieldDefault PeopleCode


  35. 25. What Are Different Variables In People Code And Their Scope?

    • System variables and user-defined variables.
    • Scopes: Global, Component, and Local.
  36. 26. When We Select A Component What Events Will Be Fired?

    • If the default mode for the component is Search: only SearchInit fires.
    • If the default mode for the component is New/Add: FieldDefault, FieldFormula, RowInit, and SearchInit fire.
  37. 27. What Databuffer Classes Are Available In People Code?

    • Rowset
    • Row
    • Record
    • Field
    • Array
    • File
    • Sql
    • Chart
    • Grid, and so on.


  39. 28. What Is The Difference Between Component Buffer And Data Buffer?

    The component buffer contains all the data of the active component. The data buffer contains data other than the data in the component buffer (data of other records).

  40. 29. What Is The Purpose Of The Sqlexec Function? What Are Its Benefits And Draw Backs?

    SQLExec is used to execute SQL statements (SELECT, INSERT, UPDATE, DELETE) directly against the database. Its drawback is that a SELECT can fetch only one row at a time.

  41. 30. Is There Any Way By Which You Can Find Out Whether The User Is In Add Mode Or Update Mode?

    • %Mode returns A for Add mode and U for Update mode.
  42. 31. In Which Events Error & Warning Are Used Most Extensively?

    • Field edit
    • Save edit
    • Search save
    • row delete
    • row insert
  43. 32. What Are Think Time Functions?

    Think-time functions suspend processing either until the user has taken some action (such as clicking a button in a message box), or until an external process has run to completion.

  44. 33. Differentiate Field Edit And Save Edit?

    In FieldEdit, a trip from the application server to the database takes place for each field change. In SaveEdit, only one trip takes place for all the fields.



  46. 34. What Is Pia And What Are Its Components?

    PIA (PeopleSoft Internet Architecture) is an n-tier architecture consisting of a client (browser), web server, application server, and database server. Jolt and Tuxedo connect the web server to the application server, which runs WSL, WSH, JSL, JSH, queues, and services. The database server holds system tables, PeopleTools tables, and application tables.

  47. 35. How Can A Component Have More Than One Search Record? Give A Situation?

    You might want to reuse the same component multiple times with different search records. You can accomplish this by overriding the component search record at run-time when the component is opened from a menu item without creating separate copies of the component.

    The component override is temporary, and occurs only when the component is opened from the menu item in which the override is set. It does not change the component definition.

  48. 36. What Is An Expert Entry?

    Expert entry enables a user to change from interactive to deferred mode at runtime for appropriate transactions.



  50. 37. Can You Hide A Primary Page In A Component? Reason?

    No, we cannot hide the primary page of a component. If the component had only one page, making it invisible would leave no component at all, so hiding the primary page is not allowed.

  51. 38. Can You Place Sub Page Into Grid? If Yes How?

    Yes, we can insert a subpage using Insert Subpage. After inserting the subpage into the main page, drag it into the grid. The page saves successfully, which shows that a subpage can be inserted into a grid.

  52. 39. What Conditions Are Required To Establish A Parent-Child Relationship Between Two Records? What Are The Advantages Of That?

    Conditions are:

    • The child record should have all the key fields of the parent record and at least one more key field in addition to them.
    • We should mention the parent record in the record properties of the child record.
    • We cannot go beyond three levels of parent/child relationships.

    Advantages are:

    • To have referential integrity.
    • No need to enter information again and again
  53. 40. What Are Table Edits?

    We have prompt table edit, yes/no table edit, translate table edit as the table edits.


  55. 41. In Case Of Record Level Audit What Is The Structure Of Table?

    The structure of the table in record level audit is: AUDIT_OPRID, AUDIT_STAMP, AUDIT_ACTN, AUDIT_RECNAME and can add fields from record.

  56. 42. What Types Of Audits Are Supported By Peoplesoft?

    We have field level audit and record level audit.

  57. 43. Which Effective Dated Rows Can Be Retrieved In Update/display Mode, Update/display All And Correction Mode?

    • Update/display: can view current and future rows. Can update only future rows.
    • Update/display all: can view history, current and future rows. Can update only future rows.
    • Correction: can view and update history, current and future rows.
  58. 44. What Is The Difference Between Key And Alternate Search Key?

    • Key: the primary key of the record; it may or may not also be used as a search key.
    • Alternate search key: a key used only for searching purposes.

300+ [REAL TIME] NoSQL Interview Questions

  1. 1. What Are The Pros And Cons Of Graph Database?

    Pros:

    • Graph databases seem to be tailor-made for networking applications. The prototypical example is a social network, where nodes represent users who have various kinds of relationships to each other. Modeling this kind of data using any of the other styles is often a tough fit, but a graph database would accept it with relish.
    • They are also perfect matches for an object-oriented system. 

    Cons:

    • Because of the high degree of interconnectedness between nodes, graph databases are generally not suitable for network partitioning.
    • Graph databases don’t scale out well. 
  2. 2. What Is Impedance Mismatch In Database Terminology?

    It is the difference between the relational model and in-memory data structures. The relational data model organizes data into a structure of tables and rows, or more properly, relations and tuples. In the relational model, a tuple is a set of name-value pairs and a relation is a set of tuples. All operations in SQL consume and return relations, which leads to the mathematically elegant relational algebra.

    This foundation on relations provides a certain elegance and simplicity, but it also introduces limitations. In particular, the values in a relational tuple have to be simple—they cannot contain any structure, such as a nested record or a list. This limitation isn’t true for in-memory data structures, which can take on much richer structures than relations. As a result, if you want to use a richer in-memory data structure, you have to translate it to a relational representation to store it on disk. Hence the impedance mismatch—two different representations that require translation.
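    As a small illustration of the mismatch (the order structure here is invented for the example), a nested in-memory value has to be flattened into simple rows before it fits the relational model:

```python
# A nested in-memory structure: an order with a list of line items.
order = {
    "order_id": 1001,
    "customer": "Alice",
    "lines": [
        {"sku": "A17", "qty": 2},
        {"sku": "B42", "qty": 1},
    ],
}

def flatten(order):
    """Translate the nested structure into flat relational-style tuples:
    one row for the order, one row per line item - the 'translation'
    forced by the impedance mismatch."""
    order_row = (order["order_id"], order["customer"])
    line_rows = [(order["order_id"], l["sku"], l["qty"]) for l in order["lines"]]
    return order_row, line_rows

order_row, line_rows = flatten(order)
print(order_row)   # (1001, 'Alice')
print(line_rows)   # [(1001, 'A17', 2), (1001, 'B42', 1)]
```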


  4. 3. What Is “polyglot Persistence” In Nosql?

    In 2006, Neal Ford coined the term polyglot programming, to express the idea that applications should be written in a mix of languages to take advantage of the fact that different languages are suitable for tackling different problems. Complex applications combine different types of problems, so picking the right language for each job may be more productive than trying to fit all aspects into a single language.

    Similarly, when working on an e-commerce business problem, using a data store for the shopping cart which is highly available and can scale is important, but the same data store cannot help you find products bought by the customers’ friends—which is a totally different question. We use the term polyglot persistence to define this hybrid approach to persistence. 

  5. 4. Say Something About Aggregate-oriented Databases?

    An aggregate is a collection of data that we interact with as a unit. Aggregates form the boundaries for ACID operations with the database. Key-value, document, and column-family databases can all be seen as forms of aggregate-oriented database. Aggregates make it easier for the database to manage data storage over clusters.

    Aggregate-oriented databases work best when most data interaction is done with the same aggregate; aggregate-ignorant databases are better when interactions use data organized in many different formations. Aggregate-oriented databases make inter-aggregate relationships more difficult to handle than intra-aggregate relationships. They often compute materialized views to provide data organized differently from their primary aggregates. This is often done with map-reduce computations.
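    The materialized-view idea can be sketched with a toy map-reduce over order aggregates (all names here are illustrative):

```python
from collections import defaultdict

# Toy aggregates: each order is one self-contained document.
orders = [
    {"id": 1, "lines": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"id": 2, "lines": [{"sku": "A", "qty": 3}]},
]

def map_phase(order):
    # Emit (sku, qty) pairs from a single aggregate.
    for line in order["lines"]:
        yield line["sku"], line["qty"]

def reduce_phase(pairs):
    # Sum quantities per sku: a view organized differently from the aggregates.
    totals = defaultdict(int)
    for sku, qty in pairs:
        totals[sku] += qty
    return dict(totals)

view = reduce_phase(p for o in orders for p in map_phase(o))
print(view)  # {'A': 5, 'B': 1}
```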


  7. 5. What Is The Key Difference Between Replication And Sharding?

    • Replication takes the same data and copies it over multiple nodes; sharding puts different data on different nodes.
    • Sharding is particularly valuable for performance because it can improve both read and write performance. Using replication, particularly with caching, can greatly improve read performance but does little for applications that have a lot of writes. Sharding provides a way to scale horizontally.
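    As a rough sketch of the contrast (the three-node list and function names are hypothetical), sharding routes each key to exactly one node, while replication copies the same key to several:

```python
import hashlib

NODES = ["node0", "node1", "node2"]

def shard_for(key):
    """Sharding: each key lives on exactly one node, chosen by hashing."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def replicas_for(key, n=2):
    """Replication: the same data is copied to n nodes (here, consecutive ones)."""
    start = NODES.index(shard_for(key))
    return [NODES[(start + i) % len(NODES)] for i in range(n)]

print(shard_for("user:42"), replicas_for("user:42"))
```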

  9. 6. Explain About Cassandra Nosql?

    Cassandra is an open-source, scalable, and highly available NoSQL distributed database management system from Apache. Cassandra claims to offer fault-tolerant linear scalability with no single point of failure, and it sits in the column-family NoSQL camp. The Cassandra data model is designed for large-scale distributed data and trades ACID-compliant data practices for performance and availability. Cassandra is optimized for very fast and highly available writes. It is written in Java and can run on a vast array of operating systems and platforms.

  10. 7. Explain How Cassandra Writes?

    Cassandra writes first to a commit log on disk for durability, then commits to an in-memory structure called a memtable. A write is successful once both commits are complete. Writes are batched in memory and written to disk in a table structure called an SSTable (sorted string table). Memtables and SSTables are created per column family. With this design Cassandra has minimal disk I/O and offers high-speed write performance, because the commit log is append-only and Cassandra doesn’t seek on writes. In the event of a fault when writing to the SSTable, Cassandra can simply replay the commit log.
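    The write path described above can be modeled in a short, purely illustrative Python sketch (the class and its parameters are invented for the example; this is not Cassandra code):

```python
class ToyWritePath:
    """Toy model of the write path: append to a commit log first (durability),
    then update an in-memory memtable; flush the memtable to a sorted
    'SSTable' once it grows large enough."""

    def __init__(self, flush_at=3):
        self.commit_log = []   # append-only; no seeks on write
        self.memtable = {}
        self.sstables = []     # each flush produces one sorted table
        self.flush_at = flush_at

    def write(self, key, value):
        self.commit_log.append((key, value))  # 1. durable append
        self.memtable[key] = value            # 2. in-memory commit
        if len(self.memtable) >= self.flush_at:
            self.flush()

    def flush(self):
        # SSTable = rows sorted by key ("sorted string table")
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

db = ToyWritePath()
for k, v in [("c", 3), ("a", 1), ("b", 2)]:
    db.write(k, v)
print(db.sstables)  # [[('a', 1), ('b', 2), ('c', 3)]]
```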


  12. 8. Explain Cassandra Data Model?

    The Cassandra data model has four main concepts: cluster, keyspace, column, and column family. Clusters contain many nodes (machines) and can contain multiple keyspaces. A keyspace is a namespace to group multiple column families, typically one per application. A column contains a name, a value, and a timestamp. A column family contains multiple columns referenced by row keys.

  13. 9. What Is Flume?

    Flume is an open-source software program developed by Cloudera that acts as a service for aggregating and moving large amounts of data around a Hadoop cluster as the data is produced, or shortly thereafter. Its primary use case is gathering log files from all the machines in a cluster and persisting them in a centralized store such as HDFS. In Flume, we create data flows by building up chains of logical nodes and connecting them to sources and sinks. For example, to move data from an Apache access log into HDFS, you create a source by tailing the access log and use a logical node to route it to an HDFS sink.


  15. 10. What Are The Modes Of Operation That Flume Supports?

    Flume supports three modes of operation: single node, pseudo-distributed, and fully distributed. Single-node mode is useful for basic testing and getting up and running quickly. Pseudo-distributed mode is a more production-like environment that lets us build more complicated flows while testing on a single physical machine. Fully distributed mode is the one to run in production; it offers two further sub-modes: a standalone mode with a single master and a distributed mode with multiple masters.


  17. 11. What Is Jaql ?

    Jaql is a JSON-based query language that translates into Hadoop MapReduce jobs. JSON is the data interchange standard that is human-readable like XML but is designed to be lighter weight. Jaql programs are run using the Jaql shell, which we start with the jaqlshell command. If we pass no arguments, it starts in interactive mode. If we pass the -b argument and the path to a file, it will execute the contents of that file as a Jaql script.

    Finally, if we pass the -e argument, the Jaql shell will execute the Jaql statement that follows it. The Jaql shell can run in two modes: the first is cluster mode, specified with a -c argument, which uses the Hadoop cluster if we have one configured; the other is minicluster mode, which starts a minicluster that is useful for quick tests. The Jaql query language is a data-flow language.


  19. 12. What Is Hive?

    Hive can be thought of as a data warehouse infrastructure providing summarization, query, and analysis of data managed by Hadoop. Hive provides a SQL interface for data stored in Hadoop, and it implicitly converts queries into MapReduce jobs so that the programmer can work at a higher level than when writing MapReduce jobs in Java. Hive is an integral part of the Hadoop ecosystem; it was initially developed at Facebook and is now an active Apache open-source project.


  21. 13. What Is Impala?

    Impala is a SQL query system for Hadoop from Cloudera. Cloudera positions Impala as a “real-time” query engine for Hadoop; by “real-time” they imply that rather than running batch-oriented jobs as with MapReduce, we can get much faster results for certain types of queries using Impala over a SQL-based front end. It does not rely on the MapReduce infrastructure of Hadoop; instead, Impala implements a completely separate engine for processing queries. This engine is a specialized distributed query engine similar to what you can find in some commercial parallel relational databases. In essence, it bypasses MapReduce.


  23. 14. What Is Bigsql?

    Big SQL is a culmination of numerous research and development projects at IBM. IBM has taken the work from these various projects and released it as a technology preview called Big SQL.

    IBM claims that Big SQL provides robust SQL support for the Hadoop ecosystem:

    • it has a scalable architecture
    • it supports SQL and data types available in SQL ’92, plus it has some additional capabilities
    • it supports JDBC and ODBC client drivers
    • it has efficient handling of “point queries”

    Big SQL is based on a multi-threaded architecture, which is good for performance; scalability in a Big SQL environment essentially depends on the Hadoop cluster itself, that is, its size and scheduling policies.

  24. 15. How Does Big Sql Work?

    The Big SQL engine analyzes incoming queries. It separates the portions to execute at the server from the portions to be executed by the cluster; rewrites queries if necessary for improved performance; determines the appropriate storage handler for the data; produces the execution plan; and executes and coordinates the query.

    IBM architected Big SQL with the goal that existing queries should run with no or few modifications, and that queries should be executed as efficiently as the chosen storage mechanisms allow. Rather than build a separate query execution infrastructure, they made Big SQL rely heavily on Hive, so much of the data manipulation language, the data definition language syntax, and the general concepts of Big SQL are similar to Hive's. Big SQL also shares catalogs with Hive via the Hive metastore; hence each can query the other's tables.


  26. 16. What Is Data Wizard?

    A Data Wizard is someone who can consistently derive money out of data, e.g. working as an employee, consultant, or in another capacity, by providing value to clients or extracting value for themselves. Even someone who designs statistical models for sports bets and uses the strategies alone is a data wizard. Rather than knowledge, what makes a data wizard successful is craftsmanship, intuition, and vision, to compete with peers who share the same knowledge but lack these other skills.


  28. 17. What Is Apache Hbase ?

    Apache HBase is an open-source columnar database built to run on top of the Hadoop Distributed File System (HDFS). Hadoop is a framework for handling large datasets in a distributed computing environment. HBase is designed to support high table-update rates and to scale out horizontally in distributed compute clusters. Its focus on scale enables it to support very large database tables, e.g. ones containing billions of rows and millions of columns.


  30. 18. List Out The Features Of Bigsql?

    IBM claims that Big SQL provides robust SQL support for the Hadoop ecosystem:

    • it has a scalable architecture;
    • it supports SQL and data types available in SQL ’92, plus it has some additional capabilities;
    • it supports JDBC and ODBC client drivers;
    • it has efficient handling of “point queries”;
    • it supports a wide variety of data sources and file formats for HDFS and HBase;
    • although it is not open source, it interoperates well with the open-source ecosystem within Hadoop.


  32. 19. List Some Drawbacks And Limitations Associated With Hive?

    1. The SQL syntax that Hive supports is quite restrictive. For example, we are not allowed to do subqueries, which are very common in the SQL world. There are no windowed aggregates, and ANSI joins are not allowed; of the many other joins developers are used to in the SQL world, most cannot be used with Hive.
    2. The other quite limiting restriction is the set of supported data types; for example, when it comes to VARCHAR or DECIMAL support, Hive lacks quite severely.
    3. When it comes to client support, the JDBC and ODBC drivers are quite limited, and there are concurrency problems when accessing Hive through these client drivers.

  34. 20. In Ravendb, What Does The Below Statement Do? using (var ds = new DocumentStore { Url = "http://localhost:8080", DefaultDatabase = "CRUDDemo" }.Initialize())

    As a first step, we are using the DocumentStore class, which inherits from the abstract class DocumentStoreBase. The DocumentStore class manages access to RavenDB and opens sessions to work with RavenDB. It needs a URL and, optionally, the name of a database. Our RavenDB server is running on port 8080 (we chose this at installation time). We also specified a default database name, CRUDDemo here. The function Initialize() initializes the current instance.


  36. 21. What Is Rss (rich Site Summary)?

    RSS (Rich Site Summary; originally RDF Site Summary; often called Really Simple Syndication) uses a family of standard web feed formats to publish frequently updated information: blog entries, news headlines, audio, video. An RSS document (called a “feed”, “web feed”, or “channel”) includes full or summarized text, plus metadata such as the publishing date and the author’s name. RSS is purely semi-structured/unstructured document data.

  37. 22. What Are The Drawbacks Of Impala?

    • Impala isn’t a GA offering yet. As a beta offering, it has several limitations in terms of functionality and capability; for example, several data sources and file formats aren’t yet supported.
    • Also, ODBC is currently the only client driver available, so JDBC applications cannot use Impala directly yet.
    • Another drawback is that Impala is only available for use with Cloudera’s distribution of Hadoop, that is, CDH 4.1.

  39. 23. List Some Benefits Of Impala?

    1. One of the key benefits is low latency for executing SQL queries on top of Hadoop. Part of this comes from bypassing the MapReduce infrastructure, which involves significant overhead, especially when starting and stopping JVMs.
    2. Cloudera also claims several magnitudes of improvement in performance compared to executing the same SQL queries using Hive.
    3. Another benefit is that if we want to look under the hood at what Cloudera has provided in Impala, or to tinker with the code, the source code is available to access and download.

  41. 24. What Is Redis?

    Redis is an advanced key-value store: an open-source NoSQL database primarily used for building highly scalable web applications. Redis holds its database entirely in memory and uses the disk only for persistence. It has a rich set of data types such as Strings, Lists, Hashes, Sets, and Sorted Sets with range queries, as well as bitmaps, HyperLogLogs, and geospatial indexes with radius queries. It finds its use where very high write and read speed is in demand.


  43. 25. In Case Of Mongodb, What Is The Advantage Of Representing The Data In Bson Format As Opposed To Json?

    It is primarily because of the following reasons:

    • Fast machine scannability
    • More data types available in BSON than in JSON
    • BSON brings a more strongly typed system compared to JSON, and BSON is compatible with the native data structures of languages like C#, Java, Python, etc.
  44. 26. What Are The Various Categories Of Nosql?

    The various categories of NoSQL databases are:

    • Key­Value Store Database
    • Column Family Database
    • Document Store Database
    • Graph Database
    • Multivalue Database
    • Object Database
    • Triple Store Database
    • Tuple Store Database
    • Tabular Database 
  45. 27. Give An Example Of Inserting Bulk Records To Redis In C#?

    Let us first create a model :

    public class Student
    {
    public int StudentID { get; set; }
    public string StudentName { get; set; }
    public string Gender { get; set; }
    public string DOB { get; set; }
    }

    Next create Redis Connector:

    using System.Collections.Generic;
    using StackExchange.Redis;
    using Newtonsoft.Json; // assumed here for JSON serialization

    public class RedisConnector
    {
    static IDatabase GetRedisInstance()
    {
    return ConnectionMultiplexer.Connect("localhost").GetDatabase();
    }
    // Bulk insert: store each student as JSON under a per-student key
    public static void BulkInsert(IEnumerable<Student> students)
    {
    var db = GetRedisInstance();
    foreach (var s in students)
    db.StringSet("student:" + s.StudentID, JsonConvert.SerializeObject(s));
    }
    }


  47. 28. What Is Connectionmultiplexer?

    The connection to Redis is handled by the ConnectionMultiplexer class which is the central object in the StackExchange.Redis namespace. The ConnectionMultiplexer is designed to be shared and reused between callers.

    e.g.

    static IDatabase GetRedisInstance()
    {
    return
    ConnectionMultiplexer
    .Connect("localhost")
    .GetDatabase();
    }

    The GetDatabase() of the ConnectionMultiplexer sealed class obtains an interactive connection to a database inside redis.

  48. 29. List Out Some Of The Features Of Redis?

    Some of the Redis features are:

    • LRU (Least Recently Used) Eviction
    • Messaging broker implementation via Publisher Subscriber Support
    • Disk Persistence
    • Automatic Failover
    • Transaction
    • Redis HyperLogLog
    • Redis Lua Scripting
    • Act as database
    • Act as a cache
    • Provides high availability via Redis Sentinel
    • Provides Automatic partitioning with Redis Cluster
    • Provides different levels of on­disk persistence 
  49. 30. What Is Graph Database?

    This kind of NoSQL database fits best where queries find, in a connected graph, the set of all nodes and edges satisfying a given predicate, starting from a given node. A classic example is any social networking site.
    Examples: Neo4j, etc.

  50. 31. Which Feature(s) Has Mongodb Removed In Order To Retain Scalability?

    Since MongoDB needs to maintain huge collections, it cannot use traditional joins and transactions across multiple collections (tables in an RDBMS). This is what brings scalability to the system.

     

  51. 32. Which Data Types Are Available In Bson?

    BSON supports all the usual types such as strings, floating-point numbers, objects (subdocuments), timestamps, and arrays, but it does not support complex numbers.

  52. 33. By Default, Which Database Does Mongodb Connect To?

    By default, the database that MongoDB connects to is test.

    e.g.

    C:\Users\some.user>mongo
    MongoDB shell version: 3.2.4
    connecting to: test
    >


  54. 34. What Are The Cons Of A Graph Database?

    • Because of the high degree of interconnectedness between nodes, graph databases are generally not suitable for network partitioning.
    • Graph databases don’t scale out well.
  55. 35. What Are The Cons Of A Traditional Rdbs Over Nosql Systems?

    • The object-relational mapping layer can be complex.
    • Entity-relationship modeling must be completed before testing begins, which slows development.
    • RDBMSs don’t scale out when joins are required.
    • Sharding over many servers can be done but requires application code and will be operationally inefficient.
    • Full-text search requires third-party tools.
    • It can be difficult to store high-variability data in tables.
  56. 36. What Is Eventual Consistency In Nosql Stores?

    Eventual consistency means eventually, when all service logic is executed, the system is left in a consistent state. This concept is widely used in distributed systems to achieve high availability. It informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.

    In NoSQL systems, eventually consistent services are often classified as providing BASE (Basically Available, Soft state, Eventual consistency), whereas an RDBMS is classified as ACID (Atomicity, Consistency, Isolation, and Durability). Leading NoSQL databases like Riak, Couchbase, and DynamoDB provide client applications with a guarantee of “eventual consistency”. Others, like MongoDB and Cassandra, are eventually consistent in some configurations.
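    One common way replicas converge is a last-write-wins merge based on timestamps. The sketch below is a simplified illustration (all names are invented), not the actual protocol of any particular store:

```python
def merge(replica_a, replica_b):
    """Last-write-wins merge: for each key keep the value with the newest
    timestamp. Applying it on every replica makes them converge to the
    same state, i.e. eventual consistency."""
    merged = dict(replica_a)
    for key, (ts, val) in replica_b.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, val)
    return merged

# Two replicas that received different updates; values are (timestamp, data).
a = {"cart": (1, ["book"])}
b = {"cart": (2, ["book", "pen"])}

a2 = merge(a, b)
b2 = merge(b, a)
print(a2 == b2, a2["cart"][1])  # True ['book', 'pen']
```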


  58. 37. What Is Cap Theorem? How Is It Applicable To Nosql Systems?

    Eric Brewer proposed the CAP theorem in 2000.

    In it he discusses three system attributes within the context of distributed databases as follows:

    1. Consistency:
      The notion that all nodes see the same data at the same time.
    2. Availability:
      A guarantee that every request to the system receives a response about whether it was successful or not.
    3. Partition Tolerance:
      A quality stating that the system continues to operate despite failure of part of the system.

    The common understanding around the CAP theorem is that a distributed database system may only provide at most 2 of the above 3 capabilities. As such, most NoSQL databases cite it as a basis for employing an eventual consistency model with respect to how database updates are handled.

  59. 38. Explain The Transaction Support By Using Base In Nosql Systems?

    The ACID properties of an RDBMS seem crucial, but they pose roadblocks for larger systems in terms of availability and performance. NoSQL provides an alternative to ACID called BASE.

    BASE means:

    • Basic Availability
    • Soft state
    • Eventual consistency

    Most NoSQL databases do not provide transaction support by default, which means developers have to think about how to implement transactions. Many NoSQL stores offer transactions at the single-document (or row, etc.) level. For example, in MongoDB, a write operation is atomic at the level of a single document, even if the operation modifies multiple embedded documents within that document.

    Since a single document can contain multiple embedded documents, single-document atomicity is sufficient for many practical use cases. For cases where a sequence of write operations must operate as if in a single transaction, you can implement a two-phase commit in your application. It’s harder to develop software in the fault-tolerant BASE compared to the ACID, but Brewer’s CAP theorem says you have no choice if you want to scale up.

  60. 39. What Is The Architectural Difference Between Applications Supporting Rdbms And Nosql Systems?

    RDBMS systems traditionally support ACID transactions at the database level, which results in easier application development. In a NoSQL system, on the other hand, most transactions are handled at the application level, and the application developer can easily abuse the implementation by making wrong choices. Fundamentally, it requires more stringent processes to create a NoSQL application.

    By contrast, NoSQL systems scale well in high-load environments. You can apply automatic sharding to minimize downtime, and nodes can be prepared in real time, which results in lower operational costs. With an RDBMS, it requires a lot of proactive strategy to maintain and meet scalability demands; at times, it becomes operationally inefficient to meet sudden high demands.

  61. 40. What Is Database Sharding? How Does It Help In Minimizing The Downtime?

    Sharding is a type of database partitioning, which divides the large databases into smaller and easily available chunks called shards. In RDBMS, it is widely known as horizontal partitioning. It’s basically splitting and maintaining the database by rows rather than columns.

    As the amount of data an organization stores increases, and when the amount of data needed to run the business exceeds the current capacity of the environment, some mechanism for breaking the information into manageable chunks is required. With NoSQL solutions, organizations have started practicing automatic sharding techniques as a means to continue storing data while minimizing downtime.

    The load on the system can be managed elastically using automatic sharding. With smart technologies around, it is possible to configure the system proactively, so that it automatically creates shards based on demand. The strategy may vary depending on the type of data, user information, and user distribution across regions. For example, if you have a site with a large user base whose most active users are in the US region rather than Asia, then it makes sense to shard your database from a regional perspective.
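    The regional example above can be sketched as a toy routing function (the shard names and the region field are hypothetical):

```python
SHARDS = {"us": "shard-us", "asia": "shard-asia", "other": "shard-other"}

def shard_for_user(user):
    """Route each row to a shard by the user's region: horizontal
    partitioning by rows, with a fallback shard for other regions."""
    return SHARDS.get(user.get("region"), SHARDS["other"])

users = [
    {"id": 1, "region": "us"},
    {"id": 2, "region": "asia"},
    {"id": 3, "region": "eu"},
]
print([shard_for_user(u) for u in users])
# ['shard-us', 'shard-asia', 'shard-other']
```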


  63. 41. What Is The Impact Of Google’s Mapreduce In The Nosql Movement?

    Google published a paper on MapReduce in 2004, which talked about simplified data processing on large clusters. In this paper, Google shared their process for transforming large volumes of web data content into search indexes using low-cost commodity CPUs. It was Google’s use of MapReduce that encouraged the use of low-cost commodity hardware for such huge applications. Google extended the map-reduce concept to reliably execute on billions of web pages on hundreds or thousands of low-cost commodity CPUs.

    This resulted in a system that would easily scale as their data increased without forcing them to purchase expensive hardware. That is where Google invented BigTable to boost their search capabilities. It was the first real use of a NoSQL columnar data store running on commodity hardware, and it made a big impact on the NoSQL drive.

  64. 42. What Are The Different Kinds Of Nosql Data Stores?

    There are varieties of NoSQL data stores available which can be widely distributed among four categories:

    • Key-value store:
      A simple data storage system that uses a key to access a value. Examples- Redis, Riak, DynamoDB, Memcache
    • Column family store:
      A sparse matrix system that uses a row and a column as keys. Example- HBase, Cassandra, Big Table
    • Graph store:
      For relationship-intensive problems. Example- Neo4j, InfiniteGraph
    • Document store:
      Storing hierarchical data structures directly in the database. Example- MongoDB, CouchDB, Marklogic

     


  66. 43. How Does Nosql Relate To Big Data?

    Big data applications are generally viewed from four perspectives: volume, velocity, variety, and veracity. NoSQL adoption, by contrast, is driven by the inability of a current application to scale efficiently. Though volume and velocity are important, NoSQL also focuses on variability and agility.

    NoSQL is often used to store big data. NoSQL stores provide simpler scalability and improved performance relative to traditional RDBMSs. They help the big data movement in a big way by storing unstructured data and providing a means to query it as required. There are different kinds of NoSQL data stores, useful for different kinds of applications. While evaluating a particular NoSQL solution, one should look at requirements in terms of automatic scalability, data loss, payment model, etc.

  67. 44. What Are The Features Of Nosql?

    When compared to relational databases, NoSQL databases are more scalable and provide superior performance, and their data model addresses several issues that the relational model is not designed to address:

    • Large volumes of structured, semi-structured, and unstructured data
    • Agile sprints, quick iteration, and frequent code pushes
    • Object-oriented programming that is easy to use and flexible
    • Efficient, scale-out architecture instead of expensive, monolithic architecture
  68. 45. Explain The Difference Between Nosql V/s Relational Database?

    Google needed a storage layer for their inverted search index. They figured a traditional RDBMS was not going to cut it, so they implemented a NoSQL data store, BigTable, on top of their GFS file system. The major point is that thousands of cheap commodity hardware machines provide the speed and the redundancy. Everyone else realized what Google had just done. Brewer's CAP theorem was proven: all RDBMS systems in use are CA systems, and people began experimenting with CP and AP systems as well. Key-value stores are vastly simpler, so they became the primary vehicle for this research.

    Software-as-a-service systems in general do not provide a SQL-like store, so people became more interested in NoSQL-type stores. Much of the take-off can be related to this history: scaling Google took some new ideas, and everyone else followed suit because this was the only known solution to the scaling problem at the time. Hence, you are willing to rework everything around Google's distributed-database idea because it is the only way to scale beyond a certain size.

300+ [REAL TIME] Natural Language Processing Interview Questions

  1. 1. What Is Nlp?

    Natural Language Processing or NLP is an automated way to understand or analyze the natural languages and extract required information from such data by applying machine learning Algorithms.

  2. 2. List Some Components Of Nlp?

    Below are the few major components of NLP.

    Entity extraction:

    It involves segmenting a sentence to identify and extract entities, such as a person (real or fictional), organization, geographies, events, etc.

    Syntactic analysis:

    It refers to analyzing how words are ordered and arranged in a sentence so that they make grammatical sense.

    Pragmatic analysis:

    Pragmatic analysis is the part of information extraction that interprets text in its real-world context, beyond the literal meaning of the words.
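    As a rough illustration of the entity-extraction component described above, here is a toy rule-based extractor. Real systems use trained statistical models (for example spaCy or Stanford NER); the regex pattern and the lookup of known organizations below are assumptions for demonstration only:

```python
import re


def extract_entities(text, known_orgs):
    """Toy entity extractor: runs of capitalized words, with a lookup
    table used to tag known organizations."""
    entities = []
    for match in re.finditer(r"\b([A-Z][a-z]+(?:\s[A-Z][a-z]+)*)\b", text):
        span = match.group(1)
        label = "ORG" if span in known_orgs else "ENTITY"
        entities.append((span, label))
    return entities


print(extract_entities("Ada Lovelace worked before Google existed.", {"Google"}))
```

    A statistical extractor would instead learn these decisions from annotated data, which lets it handle lowercase mentions, ambiguous names, and entity types this toy cannot.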


  3. Python Interview Questions

  4. 3. List Some Areas Of Nlp?

    Natural Language Processing can be used for

    • Semantic Analysis
    • Automatic summarization
    • Text classification
    • Question Answering

    Some real-life examples of NLP are iOS Siri, Google Assistant, and Amazon Echo.

  5. 4. Define The Nlp Terminology?

    NLP Terminology is based on the following factors:

    Weights and Vectors:

    TF-IDF, length(TF-IDF, doc), Word Vectors, Google Word Vectors

    Text Structure:

    Part-Of-Speech Tagging, Head of sentence, Named entities

    Sentiment Analysis:

    Sentiment Dictionary, Sentiment Entities, Sentiment Features

    Text Classification:

    Supervised Learning, Train Set, Dev(=Validation) Set, Test Set, Text Features, LDA.

    Machine Reading:

    Entity Extraction, Entity Linking, DBpedia, FRED (lib) / Pikes.


  6. Python Tutorial

  7. 5. What Is The Significance Of Tf-idf?

    Tf–idf (TF-IDF) stands for term frequency–inverse document frequency. In information retrieval, tf–idf is a numerical statistic intended to reflect how important a word is to a document within a collection or corpus.
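    A minimal sketch of computing tf–idf from scratch, using the plain log-scaled idf (libraries such as scikit-learn apply smoothed variants, so exact values differ):

```python
import math


def tf_idf(term, doc, corpus):
    """tf-idf = (term frequency in doc) x (inverse document frequency in corpus)."""
    tf = doc.count(term) / len(doc)
    n_containing = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / n_containing)
    return tf * idf


corpus = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
# "the" appears in every document, so its idf (and tf-idf) is zero;
# "cat" appears in 2 of 3 documents, so it gets a positive score.
print(tf_idf("the", corpus[0], corpus))
print(round(tf_idf("cat", corpus[0], corpus), 3))
```

    Note how the idf term drives the statistic: a word present in every document scores zero no matter how frequent it is locally.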


  8. Computer Science Engineering Interview Questions

  9. 6. What Is Part Of Speech (pos) Tagging?

    According to The Stanford Natural Language Processing Group :

    • A Part-Of-Speech Tagger (POS Tagger) is a piece of software that reads text in some language and assigns parts of speech to each word (and other token), such as noun, verb, adjective, etc.
    • PoS taggers use an algorithm to label terms in text bodies. These taggers make more complex categories than those defined as basic PoS, with tags such as “noun-plural” or even more complex labels. Part-of-speech categorization is taught to school-age children in English grammar, where children perform basic PoS tagging as part of their education.
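    The idea behind the taggers described above can be sketched with a toy lookup-based tagger. Real POS taggers (e.g. the Stanford tagger or NLTK's) use trained statistical models; the lexicon below and the default-to-NOUN rule are assumptions for illustration:

```python
# Tiny hand-written lexicon; unknown words default to NOUN, a common baseline.
LEXICON = {"the": "DET", "quick": "ADJ", "fox": "NOUN", "jumps": "VERB", "over": "ADP"}


def pos_tag(tokens):
    """Assign a part-of-speech tag to each token via dictionary lookup."""
    return [(tok, LEXICON.get(tok.lower(), "NOUN")) for tok in tokens]


print(pos_tag("The quick fox jumps over the dog".split()))
```

    A statistical tagger improves on this by using context: it can tag "jumps" as a verb in "she jumps" but as a noun in "several jumps", which a pure lookup table cannot.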
  10. 7. What Is Pragmatic Analysis In Nlp?

    Pragmatic Analysis:

    It deals with outside-world knowledge, meaning knowledge that is external to the documents and/or queries. Pragmatic analysis focuses on what was actually meant, as opposed to what was literally said, deriving the aspects of language that require real-world knowledge.


  11. Artificial Intelligence Tutorial
    Artificial Intelligence Interview Questions

  12. 8. Explain Dependency Parsing In Nlp?

    Dependency Parsing, also known as Syntactic Parsing, is the task of recognizing a sentence and assigning a syntactic structure to it. The most widely used syntactic structure is the parse tree, which can be generated using various parsing algorithms. Parse trees are useful in applications such as grammar checking and, more importantly, play a critical role in the semantic analysis stage.
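    A dependency parse can be represented as a set of (head, relation, dependent) arcs. The sketch below hand-codes one such parse rather than running a parser (libraries such as spaCy produce these structures automatically); the sentence and relation labels are illustrative assumptions:

```python
# Dependency arcs for "She ate the apple": each word points to its head.
arcs = [
    ("ate", "nsubj", "She"),   # "She" is the subject of "ate"
    ("ate", "obj", "apple"),   # "apple" is the object of "ate"
    ("apple", "det", "the"),   # "the" is the determiner of "apple"
]


def dependents(head):
    """Return the words directly governed by the given head word."""
    return [dep for h, rel, dep in arcs if h == head]


print(dependents("ate"))
```

    Because every word except the root has exactly one head, these arcs form a tree, which is what grammar checkers and semantic analyzers traverse.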

  13. 9. What Is Pac Learning?

    PAC (Probably Approximately Correct) learning is a learning framework that has been introduced to analyze learning algorithms and their statistical efficiency.


  14. Go (programming language) Interview Questions

  15. 10. What Are The Different Categories Into Which The Sequence Learning Process Can Be Categorized?

    • Sequence prediction
    • Sequence generation
    • Sequence recognition
    • Sequential decision

  16. Go (programming language) Tutorial

  17. 11. What Is Sequence Learning?

    In machine learning, sequence learning refers to learning from data in which the order of the inputs carries information (such as text, speech, or time series); models are trained to predict, generate, or recognize such sequences.


  18. Machine learning Interview Questions

  19. 12. What Is The General Principle Of An Ensemble Method And What Is Bagging And Boosting In Ensemble Method?

    The general principle of an ensemble method is to combine the predictions of several models built with a given learning algorithm in order to improve robustness over a single model. Bagging is an ensemble method for improving unstable estimation or classification schemes, chiefly by reducing the variance term, while boosting builds models sequentially to reduce the bias of the combined model. Both bagging and boosting can reduce overall error.
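    A minimal sketch of bagging in pure Python, with a plug-in base learner (in practice one would wrap e.g. scikit-learn estimators; the 1-nearest-neighbour base learner and the toy data below are assumptions):

```python
import random


def bootstrap(data, rng):
    """Draw len(data) points with replacement (a bootstrap sample)."""
    return [rng.choice(data) for _ in data]


def bagged_predict(train, x, base_learner, n_models=25, seed=0):
    """Fit base_learner on n_models bootstrap samples; majority-vote the predictions."""
    rng = random.Random(seed)
    votes = [base_learner(bootstrap(train, rng))(x) for _ in range(n_models)]
    return max(set(votes), key=votes.count)


# Toy base learner: 1-nearest-neighbour over (feature, label) pairs.
def one_nn(train):
    return lambda x: min(train, key=lambda p: abs(p[0] - x))[1]


train = [(1.0, "a"), (1.2, "a"), (3.0, "b"), (3.3, "b")]
print(bagged_predict(train, 3.1, one_nn))
```

    Averaging many models fit on resampled data is exactly what stabilizes high-variance learners such as deep decision trees; boosting differs in that each subsequent model is fit to the errors of the current ensemble rather than to an independent resample.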


  20. Python Interview Questions

  21. 13. What Is The Difference Between Heuristic For Rule Learning And Heuristics For Decision Trees?

    The difference is that the heuristics for decision trees evaluate the average quality of a number of disjoint sets, while rule learners only evaluate the quality of the set of instances covered by the candidate rule.


  22. OpenNLP Tutorial

300+ [UPDATED] Telemarketing Interview Questions

  1. 1. What Do You Understand By B2b, B2c And B2g?

    B2B
    stands for business to business, which describes commercial transactions between businesses, such as between a web development firm and a reseller. Such transactions are large in volume, and thus B2B is the most flourishing segment today. 

    B2C
    stands for business to consumer and describes transactions between a business and consumers. It is best explained by retail, in which tangible goods are sold from stores or fixed locations directly to the consumer. 

    B2G
    stands for business to government, a derivative of B2B, and describes transactions in which a business entity supplies services or goods to a government sector or firm; it can also be described as public-sector marketing.

  2. 2. How Do You Think Telemarketing Can Be Made More Effective?

    To make telemarketing more effective, every call should be planned with an objective in mind. The opening statements made by the tele-caller should be interesting enough to grab attention. It is the contribution of each tele-caller that makes telemarketing successful, so each tele-caller should be motivated to close a deal and be clear about the objective. The objections tele-callers face in this sector are easy to counter, so a response to each should be prepared in advance to cross the hurdle and make the sale.

  3. Sales Management Interview Questions

  4. 3. How Would You Make A Cold Call To Generate A Lead?

    Turning a cold call into a lead is not easy, but it is not impossible either. If handled with the right motivation and skill, a cold call can be changed into a lead. Much comes down to the way cold calls are perceived by the person making them and to the fear of failure. The key is to step back for a moment and offer what you have only once you have completely understood the landscape of the situation. Always step in with a positive attitude, as it is very important in this situation.

  5. 4. Do You Think Unwanted Marketing Should Be A Crime?

    Unwanted marketing is a crime under the law, and one can opt out of it by filing a petition or request in court; only some firms are exempted from this law. To avoid unwanted marketing, one can file a request in court to be added to the list of numbers that are not to be called by telemarketing firms. Personally, I think one should not barge into somebody's private space and cause any form of inconvenience to the end user. Telemarketing should be handled more responsibly to avoid such situations and to maintain, rather than degrade, the standards of marketing.

  6. Sales Management Tutorial

  7. 5. What Should Be The Priority In Telemarketing According To You?

    In my view, customers should be the priority: if they do not buy, we cannot sell, so they should be the foremost priority. The whole marketing sector runs on the end consumer, so it is our responsibility to keep customers satisfied and deliver what they expect from us. This can only be achieved if every individual, from the tele-caller to the CEO, takes up the responsibility of delivering to the customer and does their part the right way. If this much is done, telemarketing can be made more effective, easier, and set in the right perspective.

  8. Corporate Governance and Business Ethics

  9. 6. What Are The Factors That Contribute To Projecting A Positive Image Of Yourself To The Customer?

    In the telemarketing industry it is critical that you know how to use your voice and choose your words to project a positive image while placing or taking a call. Factors that impact image over the telephone are:

    • Vocal quality
    • Vocal tone
    • Rate of speech
    • Pitch of the tone
    • Attitude
    • Body language
    • Use of appropriate words
  10. 7. What Are The Techniques That Help Build The Trust Of The Caller?

    The following techniques help you build the trust of the customer on the other end of the telephone.

    • Speak confidently
    • Take control of the situation
    • Show genuine interest
    • Go above and beyond the call of duty
  11. Corporate Governance and Business Ethics Tutorial
    Business Development

  12. 8. What Are The Disclosures Required In Telemarketing?

    There are a few disclosures that are a must before a person engages in telemarketing. Listed below are the things that need to be disclosed.

    1. Disclosure, at the start of the call and in a polite and fair manner, of the identity on whose behalf the call is made.
    2. There should be a proper description of the product or business being put forward.
    3. The price and terms and conditions related to the product should be made very clear.
    4. Any other information related to the product prescribed should be conveyed.
  13. 9. What Is Deceptive Telemarketing And How Can It Be Avoided?

    Telemarketing is termed deceptive when misleading information of product is conveyed to attract customers. Deceptive telemarketing can be avoided by taking the following measures :

    1. No telemarketer should represent a product with false or misleading information.
    2. Lottery, chance and skill based offers should not be offered where:
      • Delivery of the prize is conditional and this is not conveyed at the beginning.
      • Information about the prize is incorrect.
    3. Offering products at no cost or less price when based on terms and conditions not specified before purchase should not be carried out.
    4. Selling products at a very high rate.
  14. Sales

  15. 10. What Are Good Telephone Etiquettes?

    When attending to a client or customer on the phone, some basic telephone etiquette should be followed. Following are the telephone etiquettes a tele-caller should observe:

    1. Be quick in answering the phone.
    2. Always make sure that the customer is greeted well.
    3. When putting a line on hold take permission prior to it from the customer on the other end.
    4. When transferring a call make sure you do it the right way and make it polite.
    5. At the end of a call make sure that the customer is satisfied and does not have any query or doubt in mind.
  16. Sales Forecasting Tutorial

  17. 11. What Will You Do When A Customer Needs To Be Put On Hold But Is Not Agreeing To It?

    Many times the customer on the line objects to being put on hold, fearing a long wait; in such situations the following tips can be helpful:

    1. Request the customer politely, explaining that it is important that he be put on hold.
    2. Clearly state the reason why the line needs to be put on hold.
    3. If the customer insists on staying on the line, ask a co-employee to retrieve the information that would otherwise require putting the line on hold; meanwhile, stay on the line and keep the customer attended.
    4. You can also ask the customer to disconnect the line for now and promise to give him a call back.
  18. Tele Sales

  19. 12. You Just Called A Customer. What Are The Steps You Will Follow During The Call?

    When making a call to a customer the call should be directed in the following way:

    1. Greet customer politely.
    2. Introduce yourself to the customer.
    3. Make clear the objective of the call
      – Who are you calling?
      – What are you calling for?
    4. Give complete information of the product or business you are promoting.
    5. Make the customer understand how the objective of the call can be beneficial for him/her.
    6. Close the call with a warm end note.
  20. Sales Management Interview Questions

  21. 13. How Can A Call Be Made Successful?

    The following tips can help make a call successful:

    1. Make sure the first impression of the call is good, as it sets the tone for the rest of the call.
    2. Be professional yet courteous.
    3. It is very important to be dedicated to the call; the opposite can be sensed very easily over the phone.
    4. Be clear about the objective of the call before you make one.
    5. At times it is important not to sell over the phone but instead to connect with the customer and close the deal at the place of business.
  22. Sales Planning Tutorial

  23. 14. List Some Of The Effective Listening Strategies That Would Be Helpful In The Telemarketing Industry?

    For survival in the telemarketing industry, it is essential to possess or acquire the ability to listen effectively and comprehend. Some effective listening strategies are as follows:

    • Understanding yourself
    • Being yourself
    • Never losing the personal touch
    • Your attitude
    • Be willing to listen
    • Setting personal goals
    • Being motivated
    • Listening actively
    • Paying attention
    • Asking questions
    • Sending appropriate feedback
  24. 15. What Are The Different Categories And Sub-categories Of Telemarketing?

    Marketing can be broadly classified into two categories :

    1. B2B – business to business
    2. B2C – business to consumer

    These categories can be further classified into four, based on the process carried out:

    Generating leads
    – The process of identifying potential customers to make a sale.

    Sales
    – Selling products.

    Outbound
    – Calls are made to the customers.

    Inbound
    – Calls are received from the customer.

  25. Channel Management

  26. 16. What Are The Characteristics That Distinguish Direct Marketing?

    Following are the characteristics that distinguish direct marketing :

    1. The customers are pre targeted.
    2. Customers are addressed directly.
    3. The response to direct marketing is measurable.
    4. The whole process is action driven.
    5. Independent of business size.
  27. 17. Explain The Following Terms :
    1. Cold Calling
    2. Spamming.
    3. Automatic Dialer

    1. Cold calling: Cold calling is the process of calling customers for a business interaction when they have not been expecting it.
    2. Spamming: Random bulk messages sent for telemarketing purposes are called spam, and the process is termed spamming.
    3. Auto dialer: Telemarketing industries generally use an electronic device or software to dial phone numbers automatically. These devices and programs are called auto dialers.
  28. Technical Support

  29. 18. What Issues Can Arise From Paying Per Appointment?

    Here is a list of problems associated with paying per appointment:

    1. Poor quality.
    2. Rate of conversion is low.
    3. The requirement for more meetings is not met.
    4. High competition.
    5. Suppliers face a lot of problems.
    6. Quality is deteriorated by quantity.
  30. Corporate Governance and Business Ethics

  31. 19. What Are The Common Telemarketing Frauds?

    Following are the common telemarketing frauds:

    1. Charity purpose.
    2. Asking for advance payment or fee.
    3. Fraud by capturing increased or excess payments.
    4. Bank related frauds.
    5. Lottery
    6. False representation of office supplies.
    7. False verification calls.