Agaric Collective: Migrating XML files into Drupal

Drupal News - 6 hours 5 minutes ago

Today we will learn how to migrate content from an XML file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. We will also talk about the difference between the two data parsers provided by the module. The example includes node, images, and paragraphs migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from JSON files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing XML files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD XML source migration, whose machine name is ud_migrations_xml_source. It comes with four migrations: udm_xml_source_paragraph, udm_xml_source_image, udm_xml_source_node_local, and udm_xml_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain XML migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from XML. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  ...
  ...
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The definite guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  ...
  ...
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  ...
  ...
</data>

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from an XML file

In any migration project, understanding the source is very important. For XML migrations, there are two major considerations. First, where in the XML tree hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. You use an XPath expression to select a set of nodes from the XML document. In this article, we use the term element when referring to an XML document node, to distinguish it from a Drupal node. Second, when you get to the set of elements that you want to import, what child elements are going to be made available to the migration. It is possible that each element contains more data than needed. In XML imports, you have to manually include the child elements that will be required for the migration. The following code snippet shows part of the local XML file relevant to the node migration:

<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_people>
    <unique_id>1</unique_id>
    <name>Michele Metts</name>
    <photo_file>P01</photo_file>
    <book_ref>B10</book_ref>
  </udm_people>
  ...
  ...
</data>

The set of elements containing node data lies two levels deep in the hierarchy: starting with data at the root and then descending one level to udm_people. Each of these elements has four children:

  • unique_id is the unique identifier for each element returned by the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local XML file for the node migration:

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to xml. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.
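
For instance, reading from two files could be configured like this (a minimal sketch; the paths are hypothetical):

urls:
  - modules/custom/my_module/sources/people_2018.xml
  - modules/custom/my_module/sources/people_2019.xml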

Technical note: Migrate Plus provides two data parser plugins for XML files. xml uses XMLReader while simple_xml uses SimpleXML. The parser to use is configured in the data_parser_plugin configuration. Also note that when you use the xml parser, the data_fetcher_plugin setting is ignored. More details below.

The item_selector configuration indicates where in the XML file lies the set of elements to be migrated. Its value is an XPath expression used to traverse the file hierarchy. In this case, the value is /data/udm_people. It is common for the expression to start with a slash (/). Verify that your expression is valid and selects the elements you intend to import.
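
If the elements lived one level deeper, the expression would simply include the extra ancestor. A hypothetical sketch:

# Assumed file structure: <data><exports><udm_people>...</udm_people></exports></data>
item_selector: /data/exports/udm_people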

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contains spaces, you need to put double quotation marks (") around it when referring to it in the migration (see the sketch after this list).
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the subtree specified by the item_selector configuration. In the example, the fields are direct children of the elements to migrate. Therefore, the XPath expression only includes the element name (e.g., unique_id). If you had nested elements, you could use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.
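
The following sketch puts these options together. It assumes a hypothetical field whose name contains a space, purely to illustrate the quoting rule mentioned above:

source:
  # ...
  fields:
    # Arbitrary name; quoting is only needed later because it contains a space.
    - name: 'book title'
      label: 'Title of the book'
      selector: book_details/title
process:
  # Double quotation marks around the name because it contains a space.
  field_ud_book_paragraph_title: '"book title"'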

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_xml_source_image
    - udm_xml_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two XML migrations that come with this example. One is for migrating images and the other for migrating paragraphs.
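
The paragraph reference on the node can be populated with the same lookup technique. A minimal sketch, assuming a hypothetical field name for the node's paragraph reference field:

process:
  # 'field_ud_favorite_book' is an assumed field name for illustration.
  field_ud_favorite_book/target_id:
    plugin: migration_lookup
    migration: udm_xml_source_paragraph
    source: src_book_ref
  # Paragraphs are revisioned entities; a complete mapping also sets the
  # corresponding target_revision_id property.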

Migrating paragraphs from an XML file

Let’s consider an example where the elements to migrate have many levels of nesting. The following snippets show part of the local XML file and source plugin configuration for the paragraph migration:

<?xml version="1.0" encoding="UTF-8" ?>
<data>
  <udm_book_paragraph>
    <book_id>B10</book_id>
    <book_details>
      <title>The definite guide to Drupal 7</title>
      <author>Benjamin Melançon et al.</author>
    </book_details>
  </udm_book_paragraph>
  ...
  ...
</data>

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_book_paragraph
  fields:
    - name: src_book_id
      label: 'Book ID'
      selector: book_id
    - name: src_book_title
      label: 'Title'
      selector: book_details/title
    - name: src_book_author
      label: 'Author'
      selector: book_details/author
  ids:
    src_book_id:
      type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the elements with paragraph data have a nested structure. In particular, the book_details element has two children: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to structure the XML file is to have two children per record, as sketched below. paragraph_id would contain the unique identifier for the record. paragraph_data would contain a child element to specify the paragraph type. It would also have an arbitrary number of extra child elements with the data to be migrated. In the process section, you would iterate over the children to map the paragraph fields.
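
A hypothetical record following that structure could look like this (element names are illustrative, not part of the example module):

<udm_paragraphs>
  <paragraph_id>P100</paragraph_id>
  <paragraph_data>
    <type>book</type>
    <title>The definite guide to Drupal 7</title>
  </paragraph_data>
</udm_paragraphs>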

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from an XML file

Let’s consider an example where the elements to migrate have more data than needed. The following snippets show part of the local XML file and source plugin configuration for the image migration:

<data>
  <udm_photos>
    <photo_id>P01</photo_id>
    <photo_url>https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg</photo_url>
    <photo_dimensions>
      <width>240</width>
      <height>351</height>
    </photo_dimensions>
  </udm_photos>
  ...
  ...
</data>

source:
  plugin: url
  # This configuration is ignored by the 'xml' data parser plugin.
  # It only has effect when using the 'simple_xml' data parser plugin.
  data_fetcher_plugin: file
  # Set to 'xml' to use XMLReader https://www.php.net/manual/en/book.xmlreader.php
  # Set to 'simple_xml' to use SimpleXML https://www.php.net/manual/en/ref.simplexml.php
  data_parser_plugin: xml
  urls:
    - modules/custom/ud_migrations/ud_migrations_xml_source/sources/udm_data.xml
  # XPath expression. It is common that it starts with a slash (/).
  item_selector: /data/udm_photos
  fields:
    - name: src_photo_id
      label: 'Photo ID'
      selector: photo_id
    - name: src_photo_url
      label: 'Photo URL'
      selector: photo_url
  ids:
    src_photo_id:
      type: string

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url
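
To make the transformation concrete, the callback plugin simply passes the source value through PHP's basename() function:

# Input  (src_photo_url):
#   https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
# Output (psf_destination_filename):
#   picture-15-1421176712.jpg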

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image elements and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the elements with image data have extra children that are not used in the migration. Particularly, the photo_dimensions element has two children representing the width and height of the image. To ignore this subtree, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/width and photo_dimensions/height, respectively.

XML file location

Important: What is described in this section only applies when you use either (1) the xml data parser or (2) the simple_xml parser with the file data fetcher.

When using the file data fetcher plugin, you have four options to indicate the location of the XML files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/xml_files/example.xml.
  • Use an absolute path pointing to the XML location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/xml_files/example.xml.
  • Use a fully-qualified URL to any built-in wrapper like http, https, ftp, ftps, etc. For example, https://understanddrupal.com/xml-files/example.xml.
  • Use a custom stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://xml_files/example.xml.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/xml_files/example.xml.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/xml-files/example.xml.

Migrating remote XML files

Important: What is described in this section only applies when you use the http data fetcher plugin.

Migrate Plus provides another data fetcher plugin named http. Under the hood, it uses the Guzzle HTTP Client library. You can use it to fetch files using any protocol supported by curl like http, https, ftp, ftps, sftp, etc. In a future blog post we will explain this data fetcher in more detail. For now, the udm_xml_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin, data_parser_plugin, and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote XML file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  # 'simple_xml' is configured to be able to use the 'http' fetcher.
  data_parser_plugin: simple_xml
  urls:
    - https://sendeyo.com/up/d/478f835718
  item_selector: /data/udm_people
  fields: ...
  ids: ...

And that is how you can use XML files as the source of your migrations. Many more configurations are possible when you use the simple_xml parser with the http fetcher. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

XMLReader vs SimpleXML in Drupal migrations

As noted in the module’s README file, the xml parser plugin uses the XMLReader interface to incrementally parse XML files. The reader acts as a cursor going forward on the document stream and stopping at each node on the way. This should be used for XML sources which are potentially very large. On the other hand, the simple_xml parser plugin uses the SimpleXML interface to fully parse XML files. This should be used for XML sources where you need to be able to use complex XPath expressions for your item selectors, or have to access elements outside of the current item element via XPath.

What did you learn in today’s blog post? Have you migrated from XML files before? If so, what challenges have you found? Did you know that you can read local and remote files? Did you know that the data_fetcher_plugin configuration is ignored when using the xml data parser? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal News

Mediacurrent: Open Waters Podcast Ep. 3: Improving Drupal's Admin UI With Cristina Chumillas

Drupal News - Tue, 08/20/2019 - 21:36

Welcome to Mediacurrent’s Open Waters, a podcast about open source solutions. In this episode, we catch up with Cristina Chumillas. Cristina comes from the design world and is passionate about front-end development. She works at Lullabot (though when we recorded this, she worked at Ymbra) and has been involved in the Drupal community for years, contributing with code, design, and organizing events. Her contributions to Drupal Core are mainly focused on front-end, design and UX. Nowadays, she's a co-organizer of the Drupal Admin UI & JS Modernization Initiative and a Drupal core UX maintainer.



Audio Download Link

Project Pick

Claro

Interview with Cristina Chumillas
  1. Tell us about yourself: What is your role, who do you work for, and where are you from?
  2. You are a busy woman, what events have you recently attended and/or are scheduled to attend in the near future?
  3. Which Drupal core initiatives are you currently contributing to?
  4. How does a better admin theme UI help site owners?  
  5. What are the main goals?
  6. Is this initiative sponsored by anyone? 
  7. Who is the target for the initiative? 
  8. How is the initiative organized? 
  9. What improvements will it bring in a short/mid/long term?
  10. How can people get involved in helping with these initiatives?
Quick-takes
  • Has been contributing to the Out of the Box initiative for a while, together with podcast co-host Mario
  • 3 reasons why Drupal needs a better admin theme UI: content productivity, savings, and less frustration
  • Main goals: We have two separate paths: the super-fancy JS app that will land at an undefined point in the future, and Claro as the new realistic and releasable short-term work that will introduce improvements with each release.
  • Why focus on admin UI? We’re focusing on the content author's experience because that’s one of the main pain points mentioned in an early survey we did last year.
  • How is the initiative organized? JS, UX & user studies, a new design system (UI), and Claro (the new theme)
  • What improvements will it bring in the short/mid/long term? Short: new theme/UI; mid: an editor role with specific features, autosave; long: the JS app.

That’s it for today’s show, thanks for joining us!  Looking for more useful tips, technical takeaways, and creative insights? Visit mediacurrent.com/podcast for more episodes and to subscribe to our newsletter.

Categories: Drupal News

Fuse Interactive: What does Drupal 7 End of Life mean for your business?

Drupal News - Tue, 08/20/2019 - 20:08
It's been a great run, but it will soon be time to say goodbye to an old friend. Over the last 8+ years, Drupal 7 has served our clients well. During that time we're thankful to have worked on 100+ Drupal 7 websites for some great organizations, from non-profits to telecoms. While we have been building all our projects on Drupal 8 for the last couple of years, Drupal 7 has continued to be a stable and effective business tool for many of our clients. As announced by Dries Buytaert at Drupal Europe (September 2018), Drupal 7 (and 8) will reach End of Life in November 2021, while Drupal 9 is scheduled to be released in 2020. In this post we hope to answer some of the questions you may have as Drupal 7 or 8 site owners / managers regarding the implications of this End of Life date.
Categories: Drupal News

Is there a way to organize VMs into folders in vCenter / ESXi?

Virtualization - Tue, 08/20/2019 - 09:34
Question: Hello, I have several ESXi servers in vCenter (6.7U2), and some servers host many different VMs. For better clarity, I would like to create folders and place the VMs in them. Resource pools are best avoided so as not to affect performance, but in terms of clarity they would be exactly what I want. I have seen that you can apparently create global folders in vCenter, but that is not what I want in this case. Does anyone have an idea whether this is feasible? 7 comments, read 211 times.
Categories: Tutorials

Drupal blog: Low-code and no-code tools continue to drive the web forward

Drupal News - Mon, 08/19/2019 - 23:34

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

Low-code and no-code tools for the web are on a decade-long rise; they enable self-service for marketers, and allow developers to focus on innovation.

A version of this article was originally published on Devops.com.

Twelve years ago, I wrote a post called Drupal and Eliminating Middlemen. For years, it was one of the most-read pieces on my blog. Later, I followed that up with a blog post called The Assembled Web, which remains one of the most read posts to date.

The point of both blog posts was the same: I believed that the web would move toward a model where non-technical users could assemble their own sites with little to no coding experience of their own.

This idea isn't new; no-code and low-code tools on the web have been on a 25-year long rise, starting with the first web content management systems in the early 1990s. Since then no-code and low-code solutions have had an increasing impact on the web. Examples include:

While this has been a long-run trend, I believe we're only at the beginning.

Trends driving the low-code and no-code movements

According to the Forrester Wave: Low-Code Development Platforms for AD&D Professionals, Q1 2019 report, "In our survey of global developers, 23% reported using low-code platforms in 2018, and another 22% planned to do so within a year."

Major market forces driving this trend include a talent shortage among developers, with an estimated one million computer programming jobs expected to remain unfilled by 2020 in the United States alone.

What is more, the developers who are employed are often overloaded with work and struggle with how to prioritize it all. Some of this burden could be removed by low-code and no-code tools.

In addition, the fact that technology has permeated every aspect of our lives — from our smartphones to our smart homes — has driven a desire for more people to become creators. As the founder of Product Hunt, Ryan Hoover, said in a blog post: "As creating things on the internet becomes more accessible, more people will become makers."

But this does not only apply to individuals. Consider this: the typical large organization has to build and maintain hundreds of websites. They need to build, launch and customize these sites in days or weeks, not months. Today and in the future, marketers can embrace no-code and low-code tools to rapidly develop websites.

Abstraction drives innovation

As discussed in my middleman blog post, developers won't go away. Just as the role of the original webmaster (FTP hand-written HTML files, anyone?) has evolved with the advent of web content management systems, the role of web developers is changing with the rise of low-code and no-code tools.

Successful no-code approaches abstract away complexity for web development. This enables less technical people to do things that previously could only be done by developers. And when those abstractions happen, developers often move on to the next area of innovation.

When everyone is a builder, more good things will happen on the web. I was excited about this trend more than 12 years ago, and remain excited today. I'm eager to see the progress no-code and low-code solutions will bring to the web in the next decade.

Categories: Drupal News

Jacob Rockowitz: Requesting a medical appointment online begins a patient's digital journey

Drupal News - Mon, 08/19/2019 - 18:59

Experience

My experience with healthcare, Drupal, and webforms

For the past 20 years, I have worked in healthcare helping Memorial Sloan Kettering Cancer Center (MSKCC) evolve their digital platform and patient experience. About ten years ago, I persuaded MSKCC to switch to Drupal 6, which was followed by a migration to Drupal 8. More recently, I have become the maintainer of the Webform module for Drupal 8. Now, I want to leverage my experience and expertise in healthcare, webforms, and Drupal, to start exploring how we can improve patient and caregiver’s digital experience related to online appointment requests.

It’s important that we understand the problem/challenge of requesting an appointment online, examine how hospitals are currently solving this problem, and then offer some recommendations and ways to improve existing approaches. Instead of writing one very long blog post, I’m going to break up this discussion into a series of three blog posts. This initial post is going to address the patient journey and experience around an appointment request form.

These blog posts are not Drupal-specific, but my goal is to create and share an exemplary "Request an appointment" form template for the Webform module for Drupal 8.

Improving patient and caregiver’s digital experience

Improving the patient and caregiver digital experience is a very broad, massive, and challenging topic. Personally, my goal when working with doctors, researchers, and caregivers is…

Making things "easy" for patients and caregivers in healthcare is easier said... Read More

Categories: Drupal News

Agaric Collective: Adding HTTP request headers and authentication to remote JSON and XML in Drupal migrations

Drupal News - Mon, 08/19/2019 - 16:45

In the previous two blog posts, we learned to migrate data from JSON and XML files. We also showed how to configure the migrations to fetch remote files. In today's blog post, we will learn how to add HTTP request headers and authentication to the request. For HTTP authentication, you need to choose among three options: Basic, Digest, and OAuth2. To provide this functionality, the Migrate API leverages the Guzzle HTTP Client library. Usage requirements and limitations will be presented. Let's begin.

Migrate Plus architecture for remote data fetching

The Migrate Plus module provides an extensible architecture for importing remote files. It makes use of different plugin types to fetch files, add HTTP authentication to the request, and parse the response. The following is an overview of the different plugins and how they work together to allow code and configuration reuse.

Source plugin

The url source plugin is at the core of the implementation. Its purpose is to retrieve data from a list of URLs. Ingrained in the system is the goal to separate the file fetching from the file parsing. The url plugin will delegate both tasks to other plugin types provided by Migrate Plus.

Data fetcher plugins

For file fetching, you have two options. The first is file, a general-purpose fetcher for getting files from the local file system or via stream wrappers. This plugin has been explained in detail in the posts about JSON and XML migrations. Because it supports stream wrappers, this plugin is very useful for fetching files from different locations and over different protocols. But it has two major downsides. First, it does not allow setting custom HTTP headers or authentication parameters. Second, this fetcher is completely ignored if used with the xml or soap data parser (see below).

The second fetcher plugin is http. Under the hood, it uses the Guzzle HTTP Client library. This plugin allows you to define a headers configuration. You can set it to a list of HTTP headers to send along with the request. It also allows you to use authentication plugins (see below). The downside is that you cannot use stream wrappers. Only protocols supported by curl can be used: http, https, ftp, ftps, sftp, etc.

Data parser plugins

Data parsers are responsible for processing the files considering their type: JSON, XML, or SOAP. These plugins let you select a subtree within the file hierarchy that contains the elements to be imported. Each record might contain more data than what you need for the migration. So, you make a second selection to manually indicate which elements will be made available to the migration. Migrate Plus provides four data parsers, but only two of them use the data fetcher plugins. Here is a summary:

  • json can use any of the data fetchers. It offers an extra configuration option called include_raw_data. When set to true, in addition to all the fields manually defined, a new one is attached to the source with the name raw. This contains a copy of the full object currently being processed. A minimal sketch is shown after this list.
  • simple_xml can use any data fetcher. It uses the SimpleXML class.
  • xml does not use any of the data fetchers. It uses the XMLReader class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
  • soap does not use any data fetcher. It uses the SoapClient class to directly fetch the file. Therefore, it is not possible to set HTTP headers or authentication.
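
The include_raw_data option mentioned in the first item can be enabled like this (a minimal sketch; the URL and selector are placeholders):

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  # Attach a 'raw' field to each record with a copy of the full source object.
  include_raw_data: true
  urls:
    - https://example.com/data.json
  item_selector: /data/udm_root
  fields: ...
  ids: ...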

The differences between xml and simple_xml were presented in the previous article.

Authentication plugins

These plugins add authentication headers to the request. If the credentials are correct, you can fetch data from protected resources. They work exclusively with the http data fetcher. Therefore, you can use them only with the json and simple_xml data parsers. To use them, you set an authentication configuration whose value can be one of the following:

  • basic for HTTP Basic authentication.
  • digest for HTTP Digest authentication.
  • oauth2 for OAuth2 authentication over HTTP.

Below are examples for JSON and XML imports with HTTP headers and authentication configured. The code snippets do not contain real migrations. You can also find them in the ud_migrations_http_headers_authentication directory of the demo repository https://github.com/dinarcon/ud_migrations.

Important: The examples are shown for reference only. Do not store any sensitive data in plain text or commit it to the repository.

JSON and XML Drupal migrations with HTTP request headers and Basic authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the basic authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: basic
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and Digest authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the digest authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: digest
    username: totally
    password: insecure
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

JSON and XML Drupal migrations with HTTP request headers and OAuth2 authentication:

source:
  plugin: url
  data_fetcher_plugin: http
  # Choose one data parser.
  data_parser_plugin: json|simple_xml
  urls:
    - https://understanddrupal.com/files/data.json
  item_selector: /data/udm_root
  # This configuration is provided by the http data fetcher plugin.
  # Do not disclose any sensitive information in the headers.
  headers:
    Accept: 'application/json; charset=utf-8'
    Accept-Encoding: 'gzip, deflate, br'
    Accept-Language: 'en-US,en;q=0.5'
    Custom-Key: 'understand'
    Arbitrary-Header: 'drupal'
  # This configuration is provided by the oauth2 authentication plugin.
  # Credentials should never be saved in plain text nor committed to the repo.
  authentication:
    plugin: oauth2
    grant_type: client_credentials
    base_uri: https://understanddrupal.com
    token_url: /oauth2/token
    client_id: some_client_id
    client_secret: totally_insecure_secret
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_title
      label: 'Title'
      selector: title
  ids:
    src_unique_id:
      type: integer
process:
  title: src_title
destination:
  plugin: 'entity:node'
  default_bundle: page

What did you learn in today’s blog post? Did you know the configuration names for adding HTTP request headers and authentication to your JSON and XML requests? Did you know that this was limited to the parsers that make use of the http fetcher? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal News

Agiledrop.com Blog: Top 10 Drupal Accessibility Modules

Drupal News - Mon, 08/19/2019 - 12:09

In this post, we'll take a look at some of the most useful modules that will help make your Drupal site more accessible to developers, content editors and users alike.

READ MORE
Categories: Drupal News

Creating a VHDX from a disk

Virtualization - Mon, 08/19/2019 - 11:48
Question: Hello, I would like to create a VHDX from a disk under Linux. However, the disk is 3 TB in size, of which only 600 GB are in use. I would therefore like to do the creation and the conversion to VHDX in a single step, skipping the empty areas. I have already searched through qemu but cannot find anything suitable, only the workaround via dd. That takes too long for me, because the empty blocks at the beginning are copied along as well. Do you have any idea? Let me rephrase my question: I have a Windows 10 ... 9 comments, read 271 times.
Categories: Tutorials

Veeam backup of a remote VM

Virtualization - Sun, 08/18/2019 - 20:27
Question: Hello, I have a short question about a backup job that I would like to run with Veeam B&R, ideally on a regular schedule... A server and a NAS are located at site A, and I want to back up the vSphere VM on that server to the NAS using Veeam. However, the machine running Veeam with all the settings is at site B, and I access site A over the WAN (VPN). If I set up the backup now, will Veeam manage it so that the backup goes from the server directly to the NAS? ... 6 comments, read 478 times.
Categories: Tutorials

Use Server Core for a rebuild?

Virtualization - Sun, 08/18/2019 - 19:25
Question: Hello everyone, a few months ago I took over responsibility for an IT landscape that had been run into the ground by a service provider for years. Now that I have cleaned up and gotten a reasonable overview, it is clear that the former SBS landscape, which was migrated to 2012 R2, has to go. A current Server 2019 Datacenter license has been purchased, so far so good. The foundation will be a Hyper-V server hosting the usual services: Active Directory, DNS, DHCP, print server, Veeam backup system, file server, and various application servers where separation makes sense. Now, Server Core has been the means to an end for quite some time; is it considered ... 33 comments, read 1177 times.
Categories: Tutorials

Agaric Collective: Migrating JSON files into Drupal

Drupal News - Sun, 08/18/2019 - 15:34

Today we will learn how to migrate content from a JSON file into Drupal using the Migrate Plus module. We will show how to configure the migration to read files from the local file system and remote locations. The example includes node, images, and paragraphs migrations. Let’s get started.

Note: Migrate Plus has many more features. For example, it contains source plugins to import from XML files and SOAP endpoints. It provides many useful process plugins for DOM manipulation, string replacement, transliteration, etc. The module also lets you define migration plugins as configurations and create groups to share settings. It offers a custom event to modify the source data before processing begins. In today’s blog post, we are focusing on importing JSON files. Other features will be covered in future entries.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD JSON source migration, whose machine name is ud_migrations_json_source. It comes with four migrations: udm_json_source_paragraph, udm_json_source_image, udm_json_source_node_local, and udm_json_source_node_remote.

You can get the Migrate Plus module using composer: composer require 'drupal/migrate_plus:^5.0'. This will install the 8.x-5.x branch where new development will happen. This branch was created to introduce breaking changes in preparation for Drupal 9. As of this writing, the 8.x-4.x branch has feature parity with the newer branch. If your Drupal site is not composer-based, you can download the module manually.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain JSON migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from JSON. In fact, three of the migrations will read from the same file. The following snippet shows a reduced version of the file to get a sense of its structure:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ], "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ], "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } }

Note: You can literally swap migration sources without changing any other part of the migration.  This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating nodes from a JSON file

In any migration project, understanding the source is very important. For JSON migrations, there are two major considerations. First, where in the file hierarchy lies the data that you want to import. It can be at the root of the file or several levels deep in the hierarchy. Second, when you get to the array of records that you want to import, what fields are going to be made available to the migration. It is possible that each record contains more data than needed. For improved performance, it is recommended to manually include only the fields that will be required for the migration. The following code snippet shows part of the local JSON file relevant to the node migration:

{ "data": { "udm_people": [ { "unique_id": 1, "name": "Michele Metts", "photo_file": "P01", "book_ref": "B10" }, {...}, {...} ] } }

The array of records containing node data lies two levels deep in the hierarchy: starting with data at the root and then descending one level to udm_people. Each element of this array is an object with four properties:

  • unique_id is the unique identifier for each record within the data/udm_people hierarchy.
  • name is the name of a person. This will be used in the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration to read a local JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: json
  urls:
    - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json
  item_selector: data/udm_people
  fields:
    - name: src_unique_id
      label: 'Unique ID'
      selector: unique_id
    - name: src_name
      label: 'Name'
      selector: name
    - name: src_photo_file
      label: 'Photo ID'
      selector: photo_file
    - name: src_book_ref
      label: 'Book paragraph ID'
      selector: book_ref
  ids:
    src_unique_id:
      type: integer

The name of the plugin is url. Because we are reading a local file, the data_fetcher_plugin is set to file and the data_parser_plugin to json. The urls configuration contains an array of file paths relative to the Drupal root. In the example, we are reading from one file only, but you can read from multiple files at once. In that case, it is important that they have a homogeneous structure. The settings that follow will apply equally to all the files listed in urls.

The item_selector configuration indicates where in the JSON file lies the array of records to be migrated. Its value is an XPath-like string used to traverse the file hierarchy. In this case, the value is data/udm_people. Note that you separate each level in the hierarchy with a slash (/).

fields has to be set to an array. Each element represents a field that will be made available to the migration. The following options can be set:

  • name is required. This is how the field is going to be referenced in the migration. The name itself can be arbitrary. If it contained spaces, you need to put double quotation marks (") around it when referring to it in the migration.
  • label is optional. This is a description used when presenting details about the migration. For example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to the field. Keep using the name.
  • selector is required. This is another XPath-like string to find the field to import. The value must be relative to the location specified by the item_selector configuration. In the example, the fields are direct children of the records to migrate. Therefore, only the property name is specified (e.g., unique_id). If you had nested objects or arrays, you would use a slash (/) character to go deeper in the hierarchy. This will be demonstrated in the image and paragraph migrations.

Finally, you specify an ids array of field names that uniquely identify each record. As already stated, the unique_id field serves that purpose. The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_json_source_image
    source: src_photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_json_source_image
    - udm_json_source_paragraph
  optional: []

The source for setting the image reference is src_photo_file. Again, this is the name of the field, not the label nor the selector. The configuration of the migration lookup plugin and the dependencies point to two JSON migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating paragraphs from a JSON file

Let’s consider an example where the records to migrate have many levels of nesting. The following snippets show part of the local JSON file and source plugin configuration for the paragraph migration:

{ "data": { "udm_book_paragraph": [ { "book_id": "B10", "book_details": { "title": "The definite guide to Drupal 7", "author": "Benjamin Melançon et al." } }, {...}, {...} ] } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_book_paragraph fields: - name: src_book_id label: 'Book ID' selector: book_id - name: src_book_title label: 'Title' selector: book_details/title - name: src_book_author label: 'Author' selector: book_details/author ids: src_book_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to paragraph records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_book_paragraph as a starting point, the records with paragraph data have a nested structure. Notice that book_details is an object with two properties: title and author. To refer to them, the selectors are book_details/title and book_details/author, respectively. Note that you can go as many levels deep in the hierarchy to find the value that should be assigned to the field. Every level in the hierarchy is separated by a slash (/).

In this example, the target is a single paragraph type. But a similar technique can be used to migrate multiple types. One way to structure the JSON file is to have two properties per record, as sketched below. paragraph_id would contain the unique identifier for the record. paragraph_data would be an object with a property to set the paragraph type. It would also have an arbitrary number of extra properties with the data to be migrated. In the process section, you would iterate over the records to map the paragraph fields.
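
A hypothetical record following that structure could look like this (property names are illustrative, not part of the example module):

{
  "paragraph_id": "P100",
  "paragraph_data": {
    "type": "book",
    "title": "The definite guide to Drupal 7"
  }
}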

The following snippet shows part of the process configuration of the paragraph migration:

process:
  field_ud_book_paragraph_title: src_book_title
  field_ud_book_paragraph_author: src_book_author

Migrating images from a JSON file

Let’s consider an example where the records to migrate have more data than needed. The following snippets show part of the local JSON file and source plugin configuration for the image migration:

{ "data": { "udm_photos": [ { "photo_id": "P01", "photo_url": "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg", "photo_dimensions": [240, 351] }, {...}, {...} ] } } source: plugin: url data_fetcher_plugin: file data_parser_plugin: json urls: - modules/custom/ud_migrations/ud_migrations_json_source/sources/udm_data.json item_selector: data/udm_photos fields: - name: src_photo_id label: 'Photo ID' selector: photo_id - name: src_photo_url label: 'Photo URL' selector: photo_url ids: src_photo_id: type: string

The plugin, data_fetcher_plugin, data_parser_plugin and urls configurations have the same values as in the node migration. The item_selector and ids configurations are slightly different to represent the path to image records and the unique identifier field, respectively.

The interesting part is the value of the fields configuration. Taking data/udm_photos as a starting point, the records with image data have extra properties that are not used in the migration. Particularly, the photo_dimensions property contains an array with two values representing the width and height of the image, respectively. To ignore this property, you simply omit it from the fields configuration. In case you wanted to use it, the selectors would be photo_dimensions/0 for the width and photo_dimensions/1 for the height. Note that you use a zero-based numerical index to get the values out of arrays. Like with objects, a slash (/) is used to separate each level in the hierarchy. You can go as far as necessary in the hierarchy.
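
For reference, this is how the width and height could be exposed if you needed them. A sketch, assuming hypothetical field names:

fields:
  - name: src_photo_width
    label: 'Photo width'
    selector: photo_dimensions/0
  - name: src_photo_height
    label: 'Photo height'
    selector: photo_dimensions/1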

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: src_photo_url

JSON file location

When using the file data fetcher plugin, you have three options to indicate the location to the JSON files in the urls configuration:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/json_files/example.json.
  • Use an absolute path pointing to the JSON location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/json_files/example.json.
  • Use a stream wrapper.

Being able to use stream wrappers gives you many more options. For instance:

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://json_files/example.json.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/json_files/example.json.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/json-files/example.json.

Migrating remote JSON files

Migrate Plus provides another data fetcher plugin named http. You can use it to fetch files using the http and https protocols. Under the hood, it uses the Guzzle HTTP Client library. In a future blog post we will explain this data fetcher in more detail. For now, the udm_json_source_node_remote migration demonstrates a basic setup for this plugin. Note that only the data_fetcher_plugin and urls configurations are different from the local file example. The following snippet shows part of the configuration to read a remote JSON file for the node migration:

source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls:
    - https://api.myjson.com/bins/110rcr
  item_selector: data/udm_people
  fields: ...
  ids: ...

And that is how you can use JSON files as the source of your migrations. Many more configurations are possible. For example, you can provide authentication information to get access to protected resources. You can also set custom HTTP headers. Examples will be presented in a future entry.

What did you learn in today’s blog post? Have you migrated from JSON files before? If so, what challenges have you found? Did you know that you can read local and remote files? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal News

LVM-based linked clones with KVM (not Proxmox!)

Virtualization - Sun, 08/18/2019 - 15:10
Question: Hello, maybe I am just not seeing the obvious right now. In any case, I cannot manage to create a base image for linked clones when the VM uses LVM volumes. I know how this works with .qcow2 vDisks, but not with LVM volumes. Can anyone give me a nudge in the right direction? Thanks. 0 comments, read 265 times.
Categories: Guides

Hyper-V VMs on an external hard drive

Virtualization - Sat, 08/17/2019 - 21:47
Question: Hello, I would like to switch from VirtualBox to Hyper-V and would like to keep storing my VMs on an external hard drive and running them from there. This works quite well, but only until I restart Windows. When I try to start a VM after the restart, Hyper-V gives me the error message "General access denied error", which according to Google & Co. points to incorrect permissions. However, I have not changed anything, and it worked before the restart. If I create new VMs instead, it works without problems - ... 18 comments, read 532 times.
Categories: Guides

Agaric Collective: Migrating CSV files into Drupal

Drupal News - Sat, 08/17/2019 - 21:38

Today we will learn how to migrate content from a Comma-Separated Values (CSV) file into Drupal. We are going to use the latest version of the Migrate Source CSV module, which depends on the third-party library league/csv. We will show how to configure the source plugin to read files with or without a header row. We will also talk about a new feature that allows you to use stream wrappers to set the file location. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD CSV source migration whose machine name is ud_migrations_csv_source. It comes with three migrations: udm_csv_source_paragraph, udm_csv_source_image, and udm_csv_source_node.

You can get the Migrate Source CSV module using composer: composer require drupal/migrate_source_csv. This will also download its dependency: the league/csv library. The example assumes you are using the 8.x-3.x branch of the module, which requires Composer to be installed. If your Drupal site is not composer-based, you can use the 8.x-2.x branch. Continue reading to learn the difference between the two branches.

Understanding the example set up

This migration will reuse the same configuration from the introduction to paragraph migrations example. Refer to that article for details on the configuration: the destinations will be the same content type, paragraph type, and fields. The source will be changed in today's example, as we use it to explain CSV migrations. The end result will again be nodes containing an image and a paragraph with information about someone’s favorite book. The major difference is that we are going to read from CSV.

Note that you can literally swap migration sources without changing any other part of the migration. This is a powerful feature of ETL frameworks like Drupal’s Migrate API. Although possible, the example includes slight changes to demonstrate various plugin configuration options. Also, some machine names had to be changed to avoid conflicts with other examples in the demo repository.

Migrating CSV files with a header row

In any migration project, understanding the source is very important. For CSV migrations, the primary thing to consider is whether or not the file contains a row of headers. Other things to consider are what characters to use as delimiter, enclosure, and escape character. For now, let’s consider the following CSV file whose first row serves as column headers:

unique_id,name,photo_file,book_ref
1,Michele Metts,P01,B10
2,Benjamin Melançon,P02,B20
3,Stefan Freudenberg,P03,B30

This file will be used in the node migration. The four columns are used as follows:

  • unique_id is the unique identifier for each record in this CSV file.
  • name is the name of a person. This will be used as the node title.
  • photo_file is the unique identifier of an image that was created in a separate migration.
  • book_ref is the unique identifier of a book paragraph that was created in a separate migration.

The following snippet shows the configuration of the CSV source plugin for the node migration:

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_people.csv
  ids: [unique_id]

The name of the plugin is csv. Then you define the path pointing to the file itself. In this case, the path is relative to the Drupal root. Finally, you specify an ids array of column names that uniquely identify each record. As already stated, the unique_id column serves that purpose. Note that there is no need to specify all the column names from the CSV file. The plugin will automatically make them available. That is the simplest configuration of the CSV source plugin.

The following snippet shows part of the process, destination, and dependencies configuration of the node migration:

process:
  field_ud_image/target_id:
    plugin: migration_lookup
    migration: udm_csv_source_image
    source: photo_file
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - udm_csv_source_image
    - udm_csv_source_paragraph
  optional: []

Note that the source for setting the image reference is photo_file. In the process pipeline you can directly use any column name that exists in the CSV file. The configuration of the migration lookup plugin and dependencies point to two CSV migrations that come with this example. One is for migrating images and the other for migrating paragraphs.

Migrating CSV files without a header row

Now let’s consider two examples of CSV files that do not have a header row. The following snippets show the example CSV file and source plugin configuration for the paragraph migration:

B10,The definite guide to Drupal 7,Benjamin Melançon et al.
B20,Understanding Drupal Views,Carlos Dinarte
B30,Understanding Drupal Migrations,Mauricio Dinarte

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_book_paragraph.csv
  ids: [book_id]
  header_offset: null
  fields:
    - name: book_id
    - name: book_title
    - name: 'Book author'

When you do not have a header row, you need to specify two more configuration options. header_offset has to be set to null. fields has to be set to an array where each element represents a column in the CSV file. You include a name for each column, following the order in which they appear in the file. The name itself can be arbitrary. If it contains spaces, you need to put quotes (') around it. After that, you set the ids configuration to one or more columns using the names you defined.

In the process section you refer to source columns as usual, writing their names and adding quotes if they contain spaces. The following snippet shows how the process section is configured for the paragraph migration:

process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: 'Book author'

The final example will show a slight variation of the previous configuration. The following two snippets show the example CSV file and source plugin configuration for the image migration:

P01,https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg
P02,https://agaric.coop/sites/default/files/pictures/picture-3-1421176784.jpg
P03,https://agaric.coop/sites/default/files/pictures/picture-2-1421176752.jpg

source:
  plugin: csv
  path: modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv
  ids: [photo_id]
  header_offset: null
  fields:
    - name: photo_id
      label: 'Photo ID'
    - name: photo_url
      label: 'Photo URL'

For each column defined in the fields configuration, you can optionally set a label. This is a description used when presenting details about the migration, for example, in the user interface provided by the Migrate Tools module. When defined, you do not use the label to refer to source columns. You keep using the column name. You can see this in the value of the ids configuration.

The following snippet shows part of the process configuration of the image migration:

process:
  psf_destination_filename:
    plugin: callback
    callable: basename
    source: photo_url

CSV file location

When setting the path configuration you have three options to indicate the CSV file location:

  • Use a relative path from the Drupal root. The path should not start with a slash (/). This is the approach used in this demo. For example, modules/custom/my_module/csv_files/example.csv.
  • Use an absolute path pointing to the CSV location in the file system. The path should start with a slash (/). For example, /var/www/drupal/modules/custom/my_module/csv_files/example.csv.
  • Use a stream wrapper. This feature was introduced in the 8.x-3.x branch of the module. Previous versions cannot make use of them.

Being able to use stream wrappers gives you many options for setting the location of the CSV file. For instance (see the sketch after this list):

  • Files located in the public, private, and temporary file systems managed by Drupal. This leverages functionality already available in Drupal core. For example: public://csv_files/example.csv.
  • Files located in profiles, modules, and themes. You can use the System stream wrapper module or apply this core patch to get this functionality. For example, module://my_module/csv_files/example.csv.
  • Files located in remote servers including RSS feeds. You can use the Remote stream wrapper module to get this functionality. For example, https://understanddrupal.com/csv-files/example.csv.
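For instance, a minimal sketch of the source section reading from Drupal's public file system could look like this, assuming a copy of the file were saved at public://csv_files/example.csv (a hypothetical location):

source:
  plugin: csv
  path: public://csv_files/example.csv
  ids: [unique_id]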
CSV source plugin configuration

The configuration options for the CSV source plugin are very well documented in the source code. They are included here for quick reference:

  • path is required. It contains the path to the CSV file. Starting with the 8.x-3.x branch, stream wrappers are supported.
  • ids is required. It contains an array of column names that uniquely identify each record.
  • header_offset is optional. It is the index of the record to be used as the CSV header row, and thereby the source of each record's field names. It defaults to zero (0) because the index is zero-based. For CSV files with no header row, the value should be set to null.
  • fields is optional. It contains a nested array of names and labels to use instead of a header row. If set, it will overwrite the column names obtained from header_offset.
  • delimiter is optional. It contains the one-character column delimiter. It defaults to a comma (,). For example, if your file uses tabs as delimiter, you set this configuration to \t. See the sketch after this list.
  • enclosure is optional. It contains the one character used to enclose column values. It defaults to a double quotation mark (").
  • escape is optional. It contains the one character used for character escaping in the column values. It defaults to a backslash (\).
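To illustrate the last three options, here is a minimal sketch for a tab-delimited file that uses single quotes as enclosures. The path and column name are hypothetical:

source:
  plugin: csv
  path: modules/custom/my_module/csv_files/example.tsv
  ids: [unique_id]
  # Columns are separated by tabs instead of commas.
  delimiter: "\t"
  # Values are enclosed in single quotes.
  enclosure: "'"
  # Backslash remains the escape character.
  escape: '\'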

Important: The configuration options changed significantly between the 8.x-3.x and 8.x-2.x branches. Refer to this change record for a reference on how to configure the plugin for the 8.x-2.x branch.

And that is how you can use CSV files as the source of your migrations. Because this is such a common need, moving the CSV source plugin into Drupal core was considered. That effort is currently on hold, and it is unclear whether it will materialize during Drupal 8’s lifecycle. The maintainers of the Migrate API are focusing their efforts on other priorities at the moment. You can read this issue to learn about the motivation and context for offering this functionality in Drupal core.

Note: The Migrate Spreadsheet module can also be used to migrate data from CSV files. It also supports Microsoft Office Excel and LibreOffice Calc (OpenDocument) files. The module leverages the PhpOffice/PhpSpreadsheet library.

What did you learn in today’s blog post? Have you migrated from CSV files before? Did you know that it is now possible to read files using stream wrappers? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal News

Spinning Code: SC DUG August 2019

Drupal News - Fri, 08/16/2019 - 13:00

After a couple months off, SC DUG met this month with a presentation on super cheap Drupal hosting.

Chris Zietlow from Mindgrub, Will Jackson from Kanopi Studios, and I all gave short talks on very cheap ways to host Drupal 8.

Chris opened by talking about using AWS Micro servers. Will shared a solution using a Raspberry Pi for a fully wireless server. I closed the discussion with a review of using Drupal Tome on Netlify.

We all worked from a loose set of rules to help keep us honest and prevent overlapping:

Rules for Cheap D8 Hosting Challenge

The goal is to figure out the cheapest D8 hosting that would actually function for a project, even if it is deeply irresponsible to actually use.

Rules
  1. It has to actually work for D8 (so modern PHP version, working database, etc.).
  2. You do not actually have to spend the money, but you do need to know all the steps required to make it work.
  3. It needs to honor the TOS for any networks and services you use (no illegal network taps – legal hidden taps are fair game).
  4. You have to share your idea with the other players so we don’t have two people propose the same solution (first-come-first-serve on ideas).
Reporting

Be prepared to talk for about 5 minutes on how your solution would work.  Your talk needs to include:

  1. Estimated Monthly cost for the first year.
  2. Steps required to make it work.
  3. Known weaknesses.

If you have a super cheap hosting solution for Drupal 8 we’d love to hear about it.

Categories: Drupal News

Agaric Collective: Introduction to paragraphs migrations in Drupal

Drupal News - Thu, 08/15/2019 - 23:52

Today we will present an introduction to paragraphs migrations in Drupal. The example consists of migrating paragraphs of one type, then connecting the migrated paragraphs to nodes. A separate image migration is included to demonstrate how they are different. At the end, we will talk about behavior that deletes paragraphs when the host entity is deleted. Let’s get started.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is UD paragraphs migration introduction whose machine name is ud_migrations_paragraph_intro. It comes with three migrations: ud_migrations_paragraph_intro_paragraph, ud_migrations_paragraph_intro_image, and ud_migrations_paragraph_intro_node. One content type, one paragraph type, and four fields will be created when the module is installed.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type, the paragraph type, and the fields are automatically created and deleted.
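For example, the relevant fragment of one of those config/install files could look like the following minimal sketch (abridged; the actual files ship with the demo module):

# Fragment of a config/install file, e.g. for the UD Paragraphs content type.
dependencies:
  enforced:
    module:
      # Uninstalling this module removes the configuration as well.
      - ud_migrations_paragraph_intro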

You can get the Paragraphs module using composer: composer require drupal/paragraphs. This will also download its dependency: the Entity Reference Revisions module. If your Drupal site is not composer-based, you can get the code for both modules manually.

Understanding the example set up

The example code creates one paragraph type named UD book paragraph (ud_book_paragraph). It has two “Text (plain)” fields: Title (field_ud_book_paragraph_title) and Author (field_ud_book_paragraph_author). A new UD Paragraphs (ud_paragraphs) content type is also created. This has two fields: Image (field_ud_image) and Favorite book (field_ud_favorite_book) containing references to images and book paragraphs imported in separate migrations. The words in parentheses represent the machine names of the different elements.

The paragraph migration

Migrating into a paragraph type is very similar to migrating into a content type. You specify the source, process the fields making any required transformation, and set the destination entity and bundle. The following code snippet shows the source, process, and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - book_id: 'B10'
      book_title: 'The definite guide to Drupal 7'
      book_author: 'Benjamin Melançon et al.'
    - book_id: 'B20'
      book_title: 'Understanding Drupal Views'
      book_author: 'Carlos Dinarte'
    - book_id: 'B30'
      book_title: 'Understanding Drupal Migrations'
      book_author: 'Mauricio Dinarte'
  ids:
    book_id:
      type: string
process:
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'
  default_bundle: ud_book_paragraph

The most important part of a paragraph migration is setting the destination plugin to entity_reference_revisions:paragraph. This plugin is actually provided by the Entity Reference Revisions module. It is very important to note that paragraph entities are revisioned. This means that when you want to create a reference to them, you need to provide two IDs: target_id and target_revision_id. Regular entity reference fields like files, images, and taxonomy terms only require the target_id. This will be further explained with the node migration.

The other configuration that you can optionally set in the destination section is default_bundle. The value will be the machine name of the paragraph type you are migrating into. You can do this when all the paragraphs for a particular migration definition file will be of the same type. If that is not the case, you can leave out the default_bundle configuration and add a mapping for the type entity property in the process section.
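For reference, this minimal sketch shows the alternative: no default_bundle in the destination, and the paragraph type mapped from a hypothetical source_paragraph_type column instead:

process:
  # Set the paragraph type (bundle) from the source data.
  type: source_paragraph_type
  field_ud_book_paragraph_title: book_title
  field_ud_book_paragraph_author: book_author
destination:
  plugin: 'entity_reference_revisions:paragraph'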

You can execute the paragraph migration with this command: drush migrate:import ud_migrations_paragraph_intro_paragraph. After running the migration, there is not much you can do to verify that it worked. Unlike other entities, there is no user interface available out of the box that lists all paragraphs in the system. One way to verify that the migration worked is to manually create a View that shows paragraphs. Another way is to query the database directly. You can inspect the tables that store the paragraph fields’ data. In this example, the tables would be:

  • paragraph__field_ud_book_paragraph_author for the current author.
  • paragraph__field_ud_book_paragraph_title for the current title.
  • paragraph_r__8c3a9563ac for all the author revisions.
  • paragraph_r__3fa7e9863a for all the title revisions.

Each of those tables contains information about the bundle (paragraph type), the entity id, the revision id, and the migrated field value. Table names are derived from the machine names of the fields. If they are too long, the field name will be hashed to produce a shorter table name. Having to query the database is not ideal. Unfortunately, the options available to check if a paragraph migration worked are limited at the moment.
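For instance, the following drush command is one way to inspect the migrated titles. This is a sketch assuming Drupal's standard field table layout (bundle, entity id, revision id, and value columns) and no database table prefix:

$ drush sql:query "SELECT bundle, entity_id, revision_id, field_ud_book_paragraph_title_value FROM paragraph__field_ud_book_paragraph_title"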

The node migration

The node migration will serve as the host for both referenced entities: images and paragraphs. The image migration is very similar to the one explained in a previous article. This time, the focus will be the paragraph migration. Both of them are set as dependencies of the node migration, so they need to be executed in advance. The following snippet shows how the source, destination, and dependencies are set:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      name: 'Michele Metts'
      photo_file: 'P01'
      book_ref: 'B10'
    - unique_id: 2
      name: 'Benjamin Melançon'
      photo_file: 'P02'
      book_ref: 'B20'
    - unique_id: 3
      name: 'Stefan Freudenberg'
      photo_file: 'P03'
      book_ref: 'B30'
  ids:
    unique_id:
      type: integer
destination:
  plugin: 'entity:node'
  default_bundle: ud_paragraphs
migration_dependencies:
  required:
    - ud_migrations_paragraph_intro_image
    - ud_migrations_paragraph_intro_paragraph
  optional: []

Note that photo_file and book_ref both contain the unique identifier of records in the image and paragraph migrations, respectively. These can be used with the migration_lookup plugin to map the reference fields in the nodes to be migrated. ud_paragraphs is the machine name of the target content type.

The mapping of the image reference field follows the same pattern as the one explained in the article on migration dependencies. Using the migration_lookup plugin, you indicate which migration should be searched for the images. You also specify which source column contains the unique identifiers that match those in the image migration. This operation will return a single value: the file ID (fid) of the image. This value can be assigned to the target_id subfield of field_ud_image to establish the relationship. The following code snippet shows how to do it:

field_ud_image/target_id:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_image
  source: photo_file

Paragraph field mappings

Before diving into the paragraph field mapping, let’s think about what needs to be done. Paragraphs are revisioned entities. To make a reference to them, you need two IDs: their entity id and their entity revision id. These two values need to be assigned to two subfields of the paragraph reference field: target_id and target_revision_id respectively. You have to come up with a process pipeline that complies with this requirement. There are many ways to do it, and the specifics will depend on your field configuration. In this example, the paragraph reference field allows an unlimited number of paragraphs to be associated, but only of one type: ud_book_paragraph. Another thing to note is that even though the field allows you to add as many paragraphs as you want, the example migrates exactly one paragraph.

With those considerations in mind, the mapping of the paragraph field will be a two-step process. First, use the migration_lookup plugin to get a reference to the paragraph. Second, use the fetched values to set the paragraph reference subfields. The following code snippet shows how to do it:

pseudo_mbe_book_paragraph:
  plugin: migration_lookup
  migration: ud_migrations_paragraph_intro_paragraph
  source: book_ref
field_ud_favorite_book:
  plugin: sub_process
  source:
    - '@pseudo_mbe_book_paragraph'
  process:
    target_id: '0'
    target_revision_id: '1'

The first step is a normal migration_lookup procedure. The important difference is that instead of getting a single value, like with images, the paragraph lookup operation will return an array of two values. The format is like [3, 7] where the 3 represents the entity id and the 7 represents the entity revision id of the paragraph. Note that the array keys are not named. To access those values, you would use the index of the elements starting with zero (0). This will be important later. The returned array is stored in the pseudo_mbe_book_paragraph pseudofield.

The second step is to set the target_id and target_revision_id subfields. In this example, field_ud_favorite_book is the machine name of the paragraph reference field. Remember that it is configured to accept an arbitrary number of paragraphs, and each will require passing an array of two elements. This means you need to process an array of arrays. To do that, you use the sub_process plugin to iterate over an array of paragraph references. In this example, the structure to iterate over would be like this:

[ [3, 7] ]

Let’s dissect how to do the mapping of the paragraph reference field. The source configuration of the sub_process plugin contains an array of paragraph references. In the example, that array has a single element: the '@pseudo_mbe_book_paragraph' pseudofield. The quotes (') and at sign (@) are required to reuse an element that appears earlier in the process pipeline. Then, in the process configuration, you set the subfields for the paragraph reference field. It is worth noting that at this point you are iterating over a list of paragraph references, even if that list contains only one element. If you had more than one paragraph to migrate, whatever you defined in process would apply to all of them.

The process configuration is an array of subfield mappings. The left side of the assignment is the name of the subfield you want to set. The right side of the assignment is an array index of the paragraph reference being processed. Remember that this array does not have named keys, so you use their numerical index to refer to them. The example sets the target_id subfield to the element at index 0 and the target_revision_id subfield to the element at index 1. Using the example data, this would be target_id: 3 and target_revision_id: 7. The quotes around the numerical indexes are important. If not used, the migration will not find the indexes and the paragraphs will not be associated. The end result of this operation will be something like this:

'field_ud_favorite_book' => array (1) [
  array (2) [
    'target_id' => string (1) "3"
    'target_revision_id' => string (1) "7"
  ]
]

There are three ways to run the migrations: manually, executing dependencies, and using tags. The following code snippet shows the three options:

# 1) Manually.
$ drush migrate:import ud_migrations_paragraph_intro_image
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

# 2) Executing dependencies.
$ drush migrate:import ud_migrations_paragraph_intro_node --execute-dependencies

# 3) Using tags.
$ drush migrate:import --tag='UD Paragraphs Intro'

And that is one way to map paragraph reference fields. In the end, all you have to do is set the target_id and target_revision_id subfields. The process pipeline that gets you to that point can vary depending on how your paragraphs are configured. The following is a non-exhaustive list of things to consider when migrating paragraphs:

  • How many paragraph types can be referenced?
  • How many paragraph instances are being migrated? Is this a multivalue field?
  • Do paragraphs have translations?
  • Do paragraphs have revisions?
Do migrated paragraphs disappear upon node rollback?

Paragraphs migrations are affected by a particular behavior of revisioned entities. If the host entity is deleted, and the paragraphs do not have translations, the whole paragraph gets deleted. That means that deleting a node will cause the referenced paragraphs’ data to be removed. How does this affect your migration workflow? If the migration of the host entity is rolled back, the paragraphs will be removed, but the Migrate API will not know about it. In this example, if you run a migrate status command after rolling back the node migration, you will see that the paragraph migration indicates that there are no pending elements to process. The file migration for the images will report the same, but in that case, the images will remain on the system.

In any migration project, it is common to perform rollback operations to test new field mappings or fix errors. Thus, chances are very high that you will stumble upon this behavior. Thanks to Damien McKenna for helping me understand this behavior and for tracking it to the rollback() method of the EntityReferenceRevisions destination plugin. So, what do you do to recover the deleted paragraphs? You have to roll back both migrations: node and paragraph. Then, you have to import the two again. The following snippet shows how to do it:

# 1) Rollback both migrations.
$ drush migrate:rollback ud_migrations_paragraph_intro_node
$ drush migrate:rollback ud_migrations_paragraph_intro_paragraph

# 2) Import both migrations again.
$ drush migrate:import ud_migrations_paragraph_intro_paragraph
$ drush migrate:import ud_migrations_paragraph_intro_node

What did you learn in today’s blog post? Have you migrated paragraphs before? If so, what challenges have you found? Did you know paragraph reference fields require two subfields to be set? Did you know that deleting the host entity also deletes the referenced paragraphs? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Categories: Drupal News

Hook 42: Dad Jokes, Development, and Drupal in Denver

Drupal News - Thu, 08/15/2019 - 20:47
Aimee Degnan, Thu, 08/15/2019 - 18:47
Categories: Drupal News

Community Working Group posts: Announcing Drupal Event Code of Conduct Training

Drupal News - Thu, 08/15/2019 - 19:25

The Drupal Community Working Group is happy to announce that we've teamed up with Otter Tech to offer live, monthly, online Code of Conduct enforcement training for Drupal Event organizers and volunteers through the end of 2019. 

The training is designed to provide "first responder" skills to Drupal community members who take reports of potential Code of Conduct issues at Drupal events, including meetups, camps, conventions, and other gatherings. The workshops will be attended by Code of Conduct enforcement teams from other open source events, which will allow cross-pollination of knowledge with the Drupal community.

Each monthly online workshop is the same; community members only have to attend one monthly workshop of their choice to complete the training.  We strongly encourage all Drupal event organizers to consider sponsoring one or two persons' attendance at this workshop.

The monthly online workshops will be presented by Sage Sharp, Otter Tech's CEO and a diversity and inclusion leader in the open source community. From the official description of the workshop, it will include:

  • Practice taking a report of a potential Code of Conduct violation (an incident report)
  • Practice following up with the reported person
  • Instructor modeling on how to take a report and follow up on a report
  • One practice scenario for a report given at an event
  • One practice scenario for a report given in an online community
  • Discussion on bias, microaggressions, personal conflicts, and false reporting
  • Frameworks for evaluating a response to a report
  • 40 minutes total of Q&A time

In addition, we have received a Drupal Community Cultivation Grant to help defray the cost of the workshop for those that need assistance. The standard cost of the workshop is $350, but Otter Tech has worked with us to allow us to provide it for $300. To register for the workshop, first let us know that you're interested by completing this sign-up form - everyone who completes the form will receive a coupon code for $50 off the regular price of the workshop.

For those that require additional assistance, we have a limited number of $100 subsidies available, bringing the workshop price down to $200. Subsidies will be provided based on reported need as well as our goal to make this training opportunity available to all corners of our community. To apply for the subsidy, complete the relevant section on the sign-up form. The deadline for applying for the subsidy is end-of-business on Friday, September 6, 2019 - those selected for the subsidy will be notified after this date (in time for the September 9, 2019 workshop).

The workshops will be held on:

  • September 9 (Monday) at 3 pm to 7 pm U.S. Pacific Time / 8 am to 12 pm Australia Eastern Time
  • October 23 (Wednesday) at 5 am to 9 am U.S. Pacific Time / 2 pm to 6 pm Central European Time
  • November 14 (Thursday) at 6 pm to 10 pm U.S. Pacific Time / 1 pm to 5 pm Australia Eastern Time
  • December 4 (Wednesday) at 9 am to 1 pm U.S. Pacific Time / 6 pm to 10 pm Central European Time

Those that successfully complete the training will be (at their discretion) listed on Drupal.org (in the Drupal Community Workgroup section) as a means to prove that they have completed the training. We feel that moving forward, the Drupal community now has the opportunity to have professionally trained Code of Conduct contacts at the vast majority of our events, once again, leading the way in the open source community.

We are fully aware that presenting the workshops in English limits who will be able to attend. We are more than interested in finding additional professional Code of Conduct workshops in other languages. Please contact us if you can assist.
 

Categories: Drupal News